
2017-11-19 Sun

09:52 Replica Rolex Watches: The Best Choice Among Accessories (1674 Bytes) » dba on unix

Luxury goods are what many people pursue. As society develops ever faster, luxury styles are refreshed constantly, and the pace of those updates keeps accelerating. For some people luxury items are a must-have, yet under the pressures of daily life the steep prices are out of reach. What to do? Replica Rolex watches solve exactly this problem.

A replica Rolex costs only a fraction of the brand-name price, affordable to people of any income level. Some worry that such a low price cannot deliver the desired effect, but that worry is unfounded: these watches are replicated at a 1:1 level, precise in every small detail, and compared against the genuine brand they cannot be told apart except by a professional. You can wear one to any occasion with confidence.

Some people choose accessories to keep up appearances, and among accessories the replica Rolex is the best choice. The watch looks high-end and refined, goes with any outfit, casual or formal, and suits any occasion without looking tacky or ostentatious. For anyone who wants to raise their sense of style, this watch is the top pick.

Mr. Li is one wearer of this watch. He runs a small business and thinks highly of it. In the past, when he went out to negotiate deals, prospective partners always felt his company lacked substance and declined to work with him. On a friend's recommendation he bought this watch, and now every negotiation goes well. The watch, he says, really is quite good.

2017-11-18 Sat

14:02 Complete Spark 2.2.0 installation guide on Windows 10. (23601 Bytes) » Developer 木匠
Goal
----
A step-by-step Apache Spark 2.2.0 installation guide for Windows 10.


Steps
----
Download winutils.exe from the GitHub repository https://github.com/steveloughran/winutils
E.g.: https://github.com/steveloughran/winutils/blob/master/hadoop-2.7.1/bin/winutils.exe

Create the Hive scratch directory (E:\tmp\hive in this example) and make it world-writable:

mkdir E:\tmp\hive

winutils.exe chmod -R 777 E:\tmp\hive
or, from MINGW64:
winutils.exe chmod -R 777 /tmp/hive

set HADOOP_HOME=E:\movie\spark\hadoop
mkdir %HADOOP_HOME%\bin
copy winutils.exe %HADOOP_HOME%\bin\

Download Spark: spark-2.2.0-bin-hadoop2.7.tgz from http://spark.apache.org/downloads.html
cd E:\movie\spark\
# in MINGW64 (Git Bash / Cygwin)
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz
# or use 7-Zip

cd E:\movie\spark\spark-2.2.0-bin-hadoop2.7
bin\pyspark.cmd



Notes
----
%HADOOP_HOME%\bin\winutils.exe must be locatable.
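
A minimal sanity check before launching pyspark, as a sketch (it assumes the commands above were run in the same console, so HADOOP_HOME is set):

import os

# Verify HADOOP_HOME and the winutils.exe binary Spark needs on Windows
hadoop_home = os.environ.get("HADOOP_HOME", "")
print(hadoop_home)  # expect E:\movie\spark\hadoop
print(os.path.exists(os.path.join(hadoop_home, "bin", "winutils.exe")))  # expect True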

Folder "E:\movie\spark\hadoop" is just an example, it can be any folder.

Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+.
http://spark.apache.org/docs/latest/
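
To confirm your runtime meets these requirements, a quick sketch from inside the pyspark shell (where the spark session is predefined):

import sys
print(sys.version)   # expect Python 2.7+ or 3.4+
print(spark.version) # expect 2.2.0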

Reference
----
https://wiki.apache.org/hadoop/WindowsProblems


Here is the example output when pyspark starts successfully:

E:\movie\spark\spark-2.2.0-bin-hadoop2.7>bin\pyspark.cmd

Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 18:41:36) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/11/17 19:07:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/17 19:07:36 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Python version 3.6.1 (v3.6.1:69c0db5, Mar 21 2017 18:41:36)
SparkSession available as 'spark'.
>>>
>>> textFile = spark.read.text("README.md")
17/11/17 19:08:03 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
>>> textFile.count()
103

>>> textFile.select(explode(split(textFile.value, "\s+")).name("word")).groupBy("word").count().show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'explode' is not defined

>>> # explode and split are defined in pyspark.sql.functions, hence the NameError above
>>> from pyspark.sql.functions import *
>>> textFile.select(explode(split(textFile.value, "\s+")).name("word")).groupBy("word").count().show()
+--------------------+-----+
|                word|count|
+--------------------+-----+
|              online|    1|
|              graphs|    1|
|          ["Parallel|    1|
|          ["Building|    1|
|              thread|    1|
|       documentation|    3|
|            command,|    2|
|         abbreviated|    1|
|            overview|    1|
|                rich|    1|
|                 set|    2|
|         -DskipTests|    1|
|                name|    1|
|page](http://spar...|    1|
|        ["Specifying|    1|
|              stream|    1|
|                run:|    1|
|                 not|    1|
|            programs|    2|
|               tests|    2|
+--------------------+-----+
only showing top 20 rows

>>>
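
A hedged follow-up to the word-count example (same pyspark session assumed): ordering the counts descending surfaces the most frequent words first.

textFile.select(explode(split(textFile.value, "\s+")).name("word")) \
    .groupBy("word").count() \
    .orderBy(desc("count")).show(10)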

# Imports for the PySpark window-function example below
from pyspark.sql import SparkSession, Row
from pyspark.sql.window import Window
from pyspark.sql.functions import *

Employee = Row("empno", "ename", "job", "mgr", "hiredate", "sal", "comm", "deptno")

emp1 = Employee(7369, "SMITH", "CLERK", 7902, "17-Dec-80", 800, 20, 10)
emp2 = Employee(7876, "ADAMS", "CLERK", 7788, "23-May-87", 1100, 0, 20)

sparkSession = SparkSession.builder.master("local").appName("Window Function").getOrCreate()

df1 = sparkSession.createDataFrame([emp1, emp2])

empDF = sparkSession.createDataFrame([
      Employee(7369, "SMITH", "CLERK", 7902, "17-Dec-80", 800, 20, 10),
      Employee(7499, "ALLEN", "SALESMAN", 7698, "20-Feb-81", 1600, 300, 30),
      Employee(7521, "WARD", "SALESMAN", 7698, "22-Feb-81", 1250, 500, 30),
      Employee(7566, "JONES", "MANAGER", 7839, "2-Apr-81", 2975, 0, 20),
      Employee(7654, "MARTIN", "SALESMAN", 7698, "28-Sep-81", 1250, 1400, 30),
      Employee(7698, "BLAKE", "MANAGER", 7839, "1-May-81", 2850, 0, 30),
      Employee(7782, "CLARK", "MANAGER", 7839, "9-Jun-81", 2450, 0, 10),
      Employee(7788, "SCOTT", "ANALYST", 7566, "19-Apr-87", 3000, 0, 20),
      Employee(7839, "KING", "PRESIDENT", 0, "17-Nov-81", 5000, 0, 10),
      Employee(7844, "TURNER", "SALESMAN", 7698, "8-Sep-81", 1500, 0, 30),
      Employee(7876, "ADAMS", "CLERK", 7788, "23-May-87", 1100, 0, 20)
])


# With an ORDER BY and no explicit frame, the window defaults to
# "range between unbounded preceding and current row", so PartSum
# is a running total within each deptno.
partitionWindow = Window.partitionBy("deptno").orderBy(desc("empno"))
sumTest = sum("sal").over(partitionWindow)
empDF.select("*", sumTest.name("PartSum")).show()

# An explicit ROWS frame: each row's PartSum covers the previous,
# current, and next row (ordered by sal descending) within its deptno.
partitionWindowRow = Window.partitionBy("deptno").orderBy(desc("sal")).rowsBetween(-1, 1)
sumTest = sum("sal").over(partitionWindowRow)
empDF.select("*", sumTest.name("PartSum")).orderBy("deptno").show()

2017-11-15 Wed

09:39 Tangled Threads: Oracle Extended Statistics Virtual Columns Trigger an OGG-01161 Error (2651 Bytes) » Oracle Life

Author: eygle, published on eygle.com

Yesterday a friend in a WeChat group raised a question: Oracle 12c GoldenGate was reporting error OGG-01161 during replication.

The message indicated that the trail file referenced column index 93 in a table that should have only 79 columns.

The error message reads:

Bad column index (93) specified for table T_INITIAL_PREM, max columns = 79.

Inspecting the table indeed revealed many extra columns, all with names starting with SYS_STS; filtering on SYS_STS% lists them:
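
A sketch of the lookup against the standard dictionary view DBA_TAB_COLS (the owner and table name are the ones from this case):

SELECT column_name, virtual_column, hidden_column
  FROM dba_tab_cols
 WHERE owner = 'MISBI'
   AND table_name = 'T_INITIAL_PREM'
   AND column_name LIKE 'SYS_STS%';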

[Screenshot: query result listing the SYS_STS% virtual columns]

So where did these columns come from?

A check of the documentation confirmed it: they are virtual columns generated by extended statistics (extended stats columns).

Drop the extended statistics and these columns are removed:

DBMS_STATS.DROP_EXTENDED_STATS(
  OWNNAME   => 'MISBI',
  TABNAME   => 'T_INITIAL_PREM',
  EXTENSION => '("SALE_SVC_ID","SALECHNL","CTFLAG","CTVALIDATE","BANK_SELL_TYPE")');
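
For context, a hedged sketch of how such columns come into being in the first place: creating a column-group extension with DBMS_STATS.CREATE_EXTENDED_STATS adds a hidden SYS_STS% virtual column to the table (the column group below is illustrative only):

SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'MISBI',
         tabname   => 'T_INITIAL_PREM',
         extension => '("SALE_SVC_ID","SALECHNL")')
  FROM dual;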

The takeaway from this case: keep learning Oracle's new features, but also think carefully about the cascading effects they may bring. As the six-degrees-of-separation theory suggests, any change in the database can quickly propagate to the core stability of the whole system.
