
2017-07-23 Sun

00:10 In Memoriam – 2 (1 Bytes) » Oracle Scratchpad
A

2017-07-22 Sat

02:32 Oracle Scheduler Fail To Send Email Notifications (10384 Bytes) » Official Pythian Blog

In this blog post I would like to share an interesting issue we encountered a couple of months ago related to scheduler job email notifications. As some of you may know, starting with Oracle 11.2 you can subscribe to receive notification emails from a scheduler job. You can define an email to be sent on various job events (job_started, job_completed, job_failed, etc.). A job email notification is defined with the DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION procedure.
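For illustration only, here is a minimal sketch of such a subscription that also passes the optional events parameter (the job name and recipient below are placeholders, not values from this case):

BEGIN
  DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
    job_name   => 'SCOTT.MY_JOB',                           -- placeholder job
    recipients => 'dba@example.com',                        -- placeholder recipient
    events     => 'job_started, job_completed, job_failed'  -- events that trigger an email
  );
END;
/

If the events parameter is omitted, a default set of failure-related events (JOB_FAILED, JOB_BROKEN and so on) is used, which is what the example later in this post shows.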

I am assuming you already have a configured and working SMTP server. If not, that can be done with DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE (attributes: email_server and email_sender).
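If those attributes still need to be set, a minimal configuration sketch could look like the following (the SMTP host, port and sender address are placeholders):

BEGIN
  -- SMTP server (host:port) the database will use to send notification emails
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('email_server', 'smtp.example.com:25');
  -- default sender address for scheduler notification emails
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('email_sender', 'oracle@example.com');
END;
/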

The issue we encountered was on database version 12.1. After configuring the scheduler jobs and email notification lists, emails were not sent out.

This blog post should give you some guidance on how you can troubleshoot and properly define email job notifications.

The problem:

In our case, we used one “system” account to manage the job email notification subscriptions. Some of you might use the same approach, having a single, separate account that manages job email notifications for all other accounts.

Let’s assume we have a job called JOB1 defined in schema IARSOV.

exec dbms_scheduler.create_job(job_name => 'JOB1', job_type => 'PLSQL_BLOCK', job_action => 'begin null; end;');

PL/SQL procedure successfully completed.

If we now try to add a job email notification for IARSOV.JOB1 with the SYSTEM user, we receive an error: “ORA-24093: AQ agent SCHED$_AGT2$_xx not granted privileges of database user”.

exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

BEGIN dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com'); END;

*
ERROR at line 1:
ORA-24093: AQ agent SCHED$_AGT2$_101 not granted privileges of database user
SYSTEM
ORA-06512: at "SYS.DBMS_ISCHED", line 7847
ORA-06512: at "SYS.DBMS_SCHEDULER", line 4063
ORA-06512: at line 1

As a workaround we can grant the necessary privileges with the DBMS_AQADM.ENABLE_DB_ACCESS procedure, part of the DBMS_AQADM package used for managing Oracle Database Advanced Queuing (AQ).

exec DBMS_AQADM.ENABLE_DB_ACCESS(agent_name  => 'SCHED$_AGT2$_101', db_username => 'SYSTEM');

PL/SQL procedure successfully completed.

We can confirm the granted privileges via the DBA_AQ_AGENT_PRIVS dictionary view (note the SCHED$_AGT2$_101 row for the SYSTEM user):

set lines 120
col agent_name for a40
col db_username for a40

select * from dba_aq_agent_privs;

AGENT_NAME                     DB_USERNAME                    HTTP SMTP
------------------------------ ------------------------------ ---- ----
DB12C_3938_ORCL11G             DBSNMP                         NO   NO
SCHED$_AGT2$_101               IARSOV                         NO   NO
SCHED$_AGT2$_101               SYSTEM                         NO   NO
SCHEDULER$_EVENT_AGENT         SYS                            NO   NO
SCHEDULER$_REMDB_AGENT         SYS                            NO   NO
SERVER_ALERT                   SYS                            NO   NO
HAE_SUB                                                       NO   NO

7 rows selected.

Let’s now try to define a job email notification for IARSOV.JOB1:

exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

PL/SQL procedure successfully completed.

set pages 200
set lines 200
col owner for a40
col job_name for a40
col recipient for a20
select owner, job_name, recipient, event from dba_scheduler_notifications where job_name = 'JOB1';

OWNER                          JOB_NAME                       RECIPIENT            EVENT
------------------------------ ------------------------------ -------------------- -------------------
IARSOV                         JOB1                           arsov@pythian.com    JOB_FAILED
IARSOV                         JOB1                           arsov@pythian.com    JOB_BROKEN
IARSOV                         JOB1                           arsov@pythian.com    JOB_SCH_LIM_REACHED
IARSOV                         JOB1                           arsov@pythian.com    JOB_CHAIN_STALLED
IARSOV                         JOB1                           arsov@pythian.com    JOB_OVER_MAX_DUR

The notification has been successfully defined; however, upon testing the events, no email was sent. In our case, the events were ending up in the exception queue AQ$_SCHEDULER$_EVENT_QTAB_E, and there was not much information we could derive from the scheduler-related AQ$ tables.

Troubleshooting:

The DBA_SUBSCR_REGISTRATIONS view contains the mapping for each schema showing which event_queue:consumer_group it is subscribed to. If we check the subscription definition for the IARSOV user, we can see its event_queue:consumer_group is linked to USER# 5, which is the SYSTEM user. In other words, IARSOV’s AQ agent SCHED$_AGT2$_101 is linked to the wrong user.

What we’re interested in is the association between SUBSCRIPTION_NAME and USER# columns.

SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                                                     USER# STATUS
-------------------------------------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101"                                         5 DB REG


SQL> select username, user_id from dba_users where user_id = 5;

USERNAME                          USER_ID
------------------------------ ----------
SYSTEM                                  5

In this case the emails won’t be sent out because the subscription registration is not properly initialized (linked) with the correct user (schema). In order for the notifications to work we need the proper link between the agent and the agent’s owner. In this case "SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101" and the IARSOV schema should be properly linked; notice that the user’s ID (101) is also part of the agent name.

What we now need to do is drop all job email notifications (in this case only one) for IARSOV’s jobs. When the last job email notification is dropped, the subscription registration is removed from DBA_SUBSCR_REGISTRATIONS.
However, note that you have to drop the job email notifications as the user for which the subscription registration is defined, in this case the SYSTEM user.

Hint: If you don’t know the password for the schema you need to connect to, you can use the Proxy Authenticated Connection feature as documented in the blog article The power of the Oracle Database “proxy authenticated” connections.
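As a rough sketch of that approach (the schema names, password and connect string below are placeholders, not taken from this case), a proxy-authenticated connection could be set up like this:

-- as a DBA: allow SYSTEM to connect on behalf of IARSOV without knowing IARSOV's password
ALTER USER iarsov GRANT CONNECT THROUGH system;

-- then, from SQL*Plus, authenticate with SYSTEM's credentials but open the session as IARSOV
CONNECT system[iarsov]/<system_password>@<tns_alias>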

SQL> show user;

USER is "SYSTEM"

SQL>
SQL> exec dbms_scheduler.remove_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

PL/SQL procedure successfully completed.

SQL> select subscription_name, user#, status from dba_subscr_registrations;

no rows selected

Once we clear the subscription, we can properly initialize the link by adding the first job notification while connected as the job’s schema owner. This initializes the event_queue:consumer_group with the correct user. After that we can add further job notifications from other users, as long as the appropriate privileges have been granted.

--as SYSTEM user.

SQL> show user;
USER is "SYSTEM"
SQL>
SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                       USER# STATUS
-------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"ILM_AGENT"                  0 DB REG

SQL>

--as IARSOV user.

SQL> show user;
USER is "IARSOV"
SQL>
SQL> exec DBMS_SCHEDULER.add_job_email_notification (job_name => 'IARSOV.JOB1', recipients => 'arsov@pythian.com');

PL/SQL procedure successfully completed.

SQL>
SQL>

--as SYSTEM user.

SQL> show user;
USER is "SYSTEM"
SQL>
SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                       USER# STATUS
-------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"ILM_AGENT"                  0 DB REG
"SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101"         101 DB REG

SQL>
SQL>
SQL> select username from dba_users where user_id = 101;

USERNAME
----------------
IARSOV

SQL>

Conclusion:

If you decide to use scheduler job email notifications, and also prefer that notification management be done by a single user (such as “system”), I would advise you to create a “dummy” job notification (connected as the job owner’s schema) as soon as you create the first scheduler job. This links the event_queue:consumer_group to the proper user. Afterwards, once you define the rest of the scheduler job notifications (under the common “system” user), you can clean up that initial “dummy” job notification.
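As a rough sketch of that workflow (schema name, job name and addresses below are placeholders, not the ones from this post):

-- connected as the job owner: the first notification initializes the registration correctly
exec dbms_scheduler.add_job_email_notification('APPUSER.JOB1', 'dummy@example.com');

-- connected as the common "system" user: define the real notifications
exec dbms_scheduler.add_job_email_notification('APPUSER.JOB1', 'dba-team@example.com');

-- once the real notifications exist, the initial "dummy" one can be removed;
-- the registration stays as long as at least one notification remains
exec dbms_scheduler.remove_job_email_notification('APPUSER.JOB1', 'dummy@example.com');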

This behavior (bug) is fixed in 12.2, so the notifications always go through the AQ agent of the user who defines them.

02:03 Words I Don’t Use, Part 1: “Methodology” (1775 Bytes) » Cary Millsap
Today, I will begin a brief sequence of blog posts that I’ll call “Words I Don’t Use.” I hope you’ll have some fun with it, and maybe find a thought or two worth considering.

The first word I’ll discuss is methodology. Yes. I made a shirt about it.

Approximately 100% of the time in the [mostly non-scientific] Oracle world that I live in, when people say “methodology,” they’re using it in the form that American Heritage describes as a pretentious substitute for “method.” But methodology is not the same as method. Methods are processes. Sequences of steps. Methodology is the scientific study of methods.

I like this article called “Method versus Methodology” by Peter Klein, which cites the same American Heritage Dictionary passage that I quoted on page 358 of Optimizing Oracle Performance.

00:47 Log Buffer #517: A Carnival of the Vanities for DBAs (2899 Bytes) » Official Pythian Blog

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

Protecting Financial Data with Oracle WebCenter and Adobe LiveCycle

Oracle Forms 12c oracle.security.jps.JpsException Error after Database change

The Future of Content Management: Oracle Content & Experience Cloud

Today Oracle released a very large “monster” upgrade. The July 2017 update includes, for the first time, the new RU (Release Update) and RUR (Release Update Revision) patches.

Cloud Ward: Who Will Win the Battle for IT’s Future?

SQL Server:

SQL Server Management Studio add-ins

Resolve Network Binding Order Warning in failover cluster

Queries to inventory your SQL Server Agent Jobs

SQL Server 2016 ColumnStore Index String Predicate Pushdown

The Fast Route from Raw Data to Analysis and Display

MySQL:

Group-Replication, sweet & sour

You QA Most of Your System — What About Your Database?

Multi-Threaded Slave Statistics

Protecting your data! Fail-safe enhancements to Group Replication.

InnoDB Basics – Compaction: when and when not

2017-07-21 Fri

23:57 How to hold a great team meeting with remote employees (3038 Bytes) » Official Pythian Blog

I want to talk about something that I initially did very poorly when I first became a remote manager in hopes you can learn from my mistakes – Team Meetings.

When I started having my first team meetings as a team manager, I would compile & organize notes, assemble the team and talk at them for 20-45 minutes. I’d give relevant updates and at the end, I’d ask for issues and questions. The team rarely brought up topics for discussion, and likely surfed the internet most of the time. We’d adjourn the meeting, I’d walk away from my desk, feeling drained, desperately in search of fresh coffee. The meetings were held in the early morning, so you can imagine how the rest of my day went.

Luckily for my team, I’ve learned a lot since then. Here’s how I do it today.

We meet weekly on Google Hangouts. I still spend the week compiling notes and an agenda. The call is almost always opened 5-10 minutes early.

We assemble and chat for a few minutes before we start as we wait for everyone to join.

Once we have the entire team online, I get things rolling. I give a short update of anything relevant, then I move on to tell them what I’ve been working on for the week. This is kept fairly high level, and sensitive matters are not shared. I am candid though – if I’ve had a rough week, I don’t hide it. Once my update is finished, I turn the meeting over to the team.

Each member is called on to update the team on what they’ve been working on that week, how things have been going for them, etc. It’s not uncommon for someone to bring up a problem they’ve been working on, something that frustrates them or something they’ve been building. The team is encouraged to ask questions of each other and they often do.

Sometimes I do need to encourage things to be taken offline, and they are.

We can usually get through the whole meeting (11 people) in about 30 minutes.

In the remaining half hour I will sometimes bring in a guest – an executive or someone else in the company – who is working on something that the team wants to see. Suggestions are encouraged. If there isn’t a guest, we will sometimes talk in-depth about a technical issue (this part of the meeting becomes optional and I encourage only interested parties to stick around).

If there isn’t anything further, I thank them for their contributions to Pythian that week and wish them well.

Direct and indirect feedback received so far indicates that our meetings are very good and that the team enjoys them.

These days, almost without fail, I leave the meeting feeling energized, thinking about what a strong team I have and proud of their achievements. Most importantly, I think they do too.

03:04 Machine Learning on the Cloud: Notes for the Layman (6014 Bytes) » Official Pythian Blog
What is Machine Learning?

Machine learning is an assortment of tools and techniques for predicting outcomes and classifying events, based on a set of interactions between variables (also referred to as features or attributes) belonging to a particular data set. Today, we see the increased democratization of machine learning technology due to the proliferation of software packages that encapsulate the mathematical techniques that form the kernel of machine learning into application programming interfaces (APIs).

A new wave of amazing software and analytics products making use of machine learning and, by extension, deep learning technology is the direct consequence of moving machine learning from the mathematical space into the software engineering domain through the packaging and open-sourcing of software libraries.

A Word on Deep Learning

Deep learning is a particular set or suite of machine learning algorithms called “Neural Networks” with major applications in computer vision (e.g. object recognition, image captioning), natural language processing, audio translation and other classification tasks such as document segmentation, to mention just a few. The following three extensions of neural networks form the core of deep learning:

  1. Multi-layer Perceptrons (MLP)
  2. Convolutional Neural Networks (CNN), and
  3. Recurrent Neural Networks (RNN).

It is important to note that Deep Learning is the primary technology involved in self-driving cars.

Enter Big Data

The concept of big data is continuously evolving. Previously, gigabytes of data were considered significant, but nowadays we are in an era where data sizes keep growing by orders of magnitude. The availability of big data spurred the gains and success of machine learning and deep learning. To put it simply, there is no learning without data.


Computational Challenges of Learning

A significant challenge in training computers to learn using machine learning / deep learning (ML/DL) is computational power. Running a suite of experiments on a decent CPU (e.g. a quad-core i7 with 8 GB RAM) can take upwards of 3 hours, and even days or weeks, for the algorithms to converge and produce a result set.

This computational lag is especially dire because getting a decent result requires several iterations of experiments, either to tune the different parameters of the algorithm or to carry out various forms of feature engineering, in order to arrive at the desired classifier/model that generalizes “optimally” to new examples.

The cost of on-premise high-end machines may be untenable for an aspiring ML/ DL practitioner, researcher, or amateur enthusiast. Moreover, the technical operations skill set required to build a cluster of commodity machines running Hadoop might be overwhelming and even sometimes a distraction for the casual user who just wants to delve into the nitty gritty of ML/ DL and explore.

The Cloud to the Rescue

The Cloud. No, it is not the bright blue curtain over our heads with packs of thick white sheets arrayed in brilliant formations.

The cloud is a term that describes large sets of computers networked together in groups called data centers. These data centers are often distributed across multiple geographical locations. A single data center, for example, can cover over 100,000 sq ft (and those are the smaller ones!).

Big companies like Google, Microsoft, Amazon and IBM have large data centers that they provision to the public (i.e. both enterprise and personal users) at a very reasonable cost.

Cloud technology and infrastructure allow individuals to leverage the computing resources of big business for ML/DL experimentation, design and development. For example, one can make use of Google’s resources via Google Cloud Platform (GCP), Amazon’s resources via Amazon Web Services (AWS), or Microsoft’s Azure platform to run a suite of algorithms with multiple test grids in approximately 10 minutes, whereas the same run could take 10 hours or more on a local device. Such is the power of the cloud.

Instead of running on a quad core machine for several hours, if not days, we can leverage thousands of cores to perform the same task for a short period and relinquish these resources after completing the job.

Another key advantage of using the cloud for ML/DL is cost effectiveness. Imagine the cost of purchasing a high-end computational machine, which may or may not be performing the actual job of high-end supercomputing all the time. Alternatively, consider the “cost” (both time and otherwise) of setting up an on-premise Hadoop infrastructure, which needs constant technical operations attention, with the danger of spending more time doing operations than actual analytics processing.

In all the scenarios presented, the cloud comes to the rescue, where thousands of CPUs are available on-demand for turbocharged computing at a very affordable price. The principle is, use the resources needed to get the job done, and relinquish them after use.

Note: This was originally published on ekababisong.org.

2017-07-19 Wed

03:37 Video: Make data part of your company’s DNA with Kick AaaS (1532 Bytes) » Official Pythian Blog

Wouldn’t your job be a lot easier if you could overcome data silos to get better insights on everything from operational efficiency, to marketing performance, to your customers’ experience? What if you could gather enough data in one place for meaningful data science exploration and machine learning? Kick Analytics as a Service (AaaS) from Pythian enables these capabilities and more.

Kick AaaS is a fully-managed, cloud-based analytics platform, delivered as a monthly service. It’s designed to let you gain insights from both internal and external data — all securely and at scale. And because it’s delivered by a team of business-savvy big data and analytics experts, it’s always completely customized for your unique use cases and data sources.

The platform brings together data from multiple sources and makes it available to different users and systems. It can help you quickly get access to actionable insights based on a 360 degree view of your customers, or optimize your applications, or even help you develop new products. Enabling more users to access more data, more easily, can make data part of your company’s DNA.

Watch the video (above) to learn how Kick AaaS can help users across your business get more from your data.