We share our skills to maximize your revenue!
When I was 6 I dreamed about dinosaurs – the bigger ones, like the T-Rex. I loved them. When I reached 16, I dreamed about the girl next door, and everything else was simply invisible. When I was 26, I was already a DBA of a few years, going through the final years of being a rookie. I dreamed about becoming a true DBA. That’s a night dream and a day dream. A dream which I am still living day in and day out. Now that I have crossed 36, the journey of dreams goes on.
I believe that if you want to be a champion DBA, you need to be a dreamer. What databases are today, and what they will be tomorrow, will be shaped only by the database dreamers. Without these dreamers, databases would be just a bland labor of bits and bytes. Dreamers live in an ideal world where they strive to create and nurture the things around them. So a dreamer’s database is the stainless vision of his lofty ideals.
A dreamer DBA cherishes his visions. A dreamer DBA hears only the music which stirs the chords of the heart. A dreamer DBA adopts only what’s most beautiful for his database. He drapes the configuration of his darling database with only the purest of parameters, for out of them grow all delightful conditions and a heavenly optimized environment, and when he remains true to it, an ideal database system gets created.
So dream big. Dream ideally, because a database dream is the prophecy which will dawn one day, and dawn it will.
Everything initially is a dream. Rose is a dream in a seed, initially. A Rainbow Lorikeet sits like a dream in an egg, for some time. At the start, every database is a bunch of files and processes. Let your dreams flourish and evolve positively, and channel them to become great implementations. From dreams, originate realities.
If you are a DBA mired in crisis-hit databases, you need to worry only if you don’t dream of improvement and resolution. If, in difficult circumstances, you are dreaming about refinements, solutions, and elegance, only then will you be able to achieve them. To aspire is to achieve. To dream is to get. Dreams are a kind of thought. Keep them positive and hopeful, your acts will align accordingly, and you will be the victor; a true champion DBA.
If you are looking at today’s big names in databases and wondering in awe, that’s alright. But don’t attribute it to luck, fortune, or chance. Don’t say how lucky those cool DBA kids are, because they are not. They dreamed big, then followed their dreams, and those dreams enabled them to do great things. They had their share of struggles and failures, but they persisted. They put in the effort and got the results. They were able to do that because they didn’t want their dreams to crash.
You will become what your dreams are.
I have also spent time working with MySQL, a relational database with open source roots, now part of Oracle. In this post I want to talk about my impressions of MySQL and some tips and tricks I learned while working with it.
First of all, you have to be ready to get your hands dirty. I couldn't find a package structure for database administration in MySQL - like the DBMS libraries of packages and procedures in Oracle. That means you will have to do most things on your own. Nevertheless, the good news is that the database starts showing an Oracle banner when you log in from version 5.0 onwards, and features like online schema changes, more cost-based optimisation, and partitioning were added in version 5.6 - a sign of good things to come.
Some key points
- Data Import/Export - You can use the native mysqldump utility to dump data with parameters, but it is slow. You can dump schemas and data, but I couldn't get it to dump data fast (in parallel). That is why I strongly recommend mydumper, an open source utility written by engineers with Oracle and MySQL backgrounds, which dumps data using parallel threads and is very fast. Import can be done in parallel as well, and it can give you that boost, provided your hardware permits it. Don't bother disabling constraints or dropping indexes before imports, as posts and suggestions on the net will tell you to; mysqldump already does that for you.
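For reference, a minimal sketch of the two approaches (the database name, output directory, and thread count are placeholders; check the flags against the versions you have installed):

```shell
# mysqldump: single-threaded, schema + data into one file (slow on big sets)
mysqldump --single-transaction --routines mydb > mydb.sql

# mydumper/myloader: dump and restore with parallel threads;
# tune --threads to what your hardware permits
mydumper --database mydb --threads 4 --outputdir /backups/mydb
myloader --directory /backups/mydb --threads 4
```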
- Hot Backup - mylvmbackup seems to be the de-facto script for taking hot backups while the database is online; there are tools like XtraBackup from Percona too. It takes a snapshot of the disk where your datafiles and logfiles are. At restore time it does a crash recovery using the logs and brings the database transactions forward to the point of the snapshot. Then, if you have the logs from after that point, you can play them forward and bring the database to a point in time after the backup.
- Parallel processing - Nada, there is none! I couldn't get it to do anything in parallel. The only thing I managed to do in parallel was to export and import data with mydumper - that works! So if you have many CPUs, you will be watching them sit idle most of the time as only one thread chugs away, unless you use mydumper for your import/export operations, where you can make those CPUs sweat.
- DBMS packages - Do you fancy automating, do you need scripts for repetitive tasks? Well, there is no DBMS package library to help you administer the database in MySQL. Instead, you can use the Percona Toolkit scripts from Percona, a consultancy specialising in helping MySQL DBAs do great work with MySQL databases. They have a variety of scripts, from comparing (diff) and syncing databases and tables to extracting metadata and GRANTS structures.
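A couple of examples of the kind of thing the toolkit covers (host names and credentials below are placeholders):

```shell
# Verify that replicas hold the same data as the master
pt-table-checksum h=master_host,u=dbuser,p=secret

# Extract GRANT statements for all users, e.g. for migrating accounts
pt-show-grants h=localhost,u=root,p=secret
```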
- Hints, Explain Plan, Performance Tuning - I couldn't see much cost-based optimisation in MySQL. The data dictionary (INFORMATION_SCHEMA) has metadata names but doesn't hold any dynamic statistics about objects; the row-count estimates it holds for tables and indexes can be up to 50% wrong. The whole thing is based on heuristics, I suppose. The EXPLAIN output is just a row per table saying what the optimiser will do; there is no cost analysis or logical tree structure of execution plans yet. I couldn't see much on join orders either - joins are nested loops only, with no HASH or MERGE joins yet.
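To illustrate what you get, a sketch against a hypothetical table (the table and column names are made up for the example):

```sql
-- "orders" and "customer_id" are hypothetical names for illustration.
-- EXPLAIN returns one row per table in the query, with columns like
-- id, select_type, table, type, possible_keys, key, rows, Extra:
-- an access method and a row estimate, but no cost tree.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```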
MySQL is a popular relational database. The free version of this database is probably all a small website or a start-up needs. Having said that, many sites outgrow MySQL and still stay with it.
Oracle will probably turn it into a serious database too. Adding partitioning and multi-threading in the recent releases is a step forward in becoming an Enterprise-scale database. I don't know much about the MySQL Cluster version, and MySQL Replication, I know, takes the load of reads off the master. I want to see it doing more performance tuning science.
Top tools with MySQL that I used
MySQL Workbench - SQL IDE.
Mydumper - Fast logical backup and restore.
Mylvmbackup - Hot backup script.
Pentaho Kettle - PDI is an all-round data integration and middleware tool.
You can find many resources about it on the Internet. I started to learn about Python by watching "Introduction to Python" by Jessica McKellar. It's a good video course to help you start with Python.
It gives a lot of examples and makes Python easy to learn. If you would like to start Python programming on your own, start with this course and learn to:
- Set up a development environment with Python and a text editor
- Explore basic data types such as integers, strings, lists, and dictionaries
- Learn how looping lets you do lots of work with a little bit of code
- Gain access to more functionality in Python with modules
- Practice reading, writing, and running your first Python programs
- Navigate the command line for writing larger programs
- Write your own functions for encapsulating useful work
- Use classes to group, name, and reuse functions and variables
- Practice what you’ve learned with the state capitals quizzer and Scrabble cheater projects
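As a taste of those topics, here is a minimal sketch in the spirit of the state capitals quizzer (dictionaries, functions, and a loop; the quiz data is just a small sample I picked, not the course's own):

```python
# A small sample of quiz data stored in a dictionary
CAPITALS = {
    "Texas": "Austin",
    "Oregon": "Salem",
    "New York": "Albany",
}

def check_answer(state, guess):
    """Return True if the guess matches the state's capital (case-insensitive)."""
    return CAPITALS[state].strip().lower() == guess.strip().lower()

def quiz():
    """Ask about every state in the dictionary and report the score."""
    score = 0
    for state in CAPITALS:
        guess = input("What is the capital of %s? " % state)
        if check_answer(state, guess):
            score += 1
    print("You got %d out of %d." % (score, len(CAPITALS)))

# Call quiz() to play interactively.
```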
By the end of the video course, you will have learned what's necessary to begin with Python.
While studying, you can follow up with practice at the link.
Written by: Surachart Opun http://surachartopun.com
The Log Buffer carries on with sparkling blog posts gathered from across Oracle, SQL Server, and MySQL.
The VISIBILITY column in USER_INDEXES is used to check that each change has worked.
Oracle Utilities Application Framework V220.127.116.11.0 has been released with Oracle Utilities Customer Care and Billing V18.104.22.168.0.
There has been a good deal of activity in the Enhancement Request (ER) Community which is being piloted in Procurement.
A personalized customer experience is already beginning to overtake price and product as the key brand differentiator.
A common request is to produce query zones which are aligned with the base product zones.
How to centralize your SQL Server Event Logs.
Use Extended Events to Get More Information About failed SQL Server Login Attempts.
DAX: Topn is not a Filter
Sometimes, you just want to do a search in a SQL Server database as if you were using a search engine like Google.
Stairway to SQL PowerShell Level 6: PowerShell Modules.
Name Change: MySQL Connect Becomes “MySQL Central @ OpenWorld”.
How to identify and cure MySQL replication slave lag.
MySQL, the original brand, the one developed by the MySQL team at Oracle, is steadily evolving.
Labor Day: Let’s upgrade a Galera Cluster.
Upgrading temporal columns from MySQL 5.5 to MySQL 5.6 format.
Today’s blog post completes our series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper. In the first two posts, we discussed the growing trend toward data outsourcing, and conducted a high level analysis of the strengths and weaknesses of the three sourcing options.
To protect the wellness of an organization’s data infrastructure, there are several criteria that should be evaluated. One critical element is the quality of talent.
The more knowledge and skills team members can bring to the table, the more they can add value to any project they’re involved with. Both breadth and depth of experience are critical to ensure they have a solid understanding of the client’s environment and know how to get the most value out of it right from the start.
Top DBAs and infrastructure experts can command high fees. However, the higher short-term cost associated with investing in quality is more than made up for by the productivity and efficiency gains they deliver. For example, the more experience team members have with systems similar to a new client’s, the faster they can identify processes that can be easily automated, resulting in immediate cost savings and freeing them up to focus on more strategic activities.
To access the full list of criteria, download the rest of our white paper, Data Infrastructure Outsourcing.
Read the first blog post, The Growing Trend Toward Data Infrastructure Outsourcing.
Read the second blog post, Developing a Data Infrastructure Sourcing Strategy.
Well, I took the Oracle 12c OCP upgrade exam this morning and didn’t pass it. I’ve spent many hours over the past weeks and months reading up on new 12c features and it wasn’t enough. I also installed 12c databases and tested out a variety of features to no avail.
Ultimately my downfall was that I tried to prepare for the test from the manuals and the descriptions of what the test covered. Also, I purchased the Kaplan TestPrep software which only covered the first part of the 12c upgrade exam and only included 50 questions.
I should have waited until an upgrade exam book came out and I should have gotten the more comprehensive Transcender software so I would have a more complete set of questions and a better idea what would be covered. I can’t study everything.
If you don’t know, the 12c OCP upgrade exam has a new “Key DBA Skills” section which wasn’t present in earlier upgrade exams. You have to pass both sections. The first section covers the new 12c features and corresponds to the previous exams. I passed this section, although barely, even though I felt confident going in that I would score a high percentage. The breadth of the topics covered by the second section worried me because I can’t study everything, and it definitely covered things I didn’t study, including some features I’ve never used.
Both parts of the exam were challenging. It seems like a pretty tough test. I’ve got my Oracle 7, 9, 10, and 11 certifications and passed all of those tests on the first try, so this is my first failure. So, I’m trying to regroup and think about where to go from here.
Ideally, I’d like to get Sam Alapati’s book after it comes out on Amazon and get the Transcender software as well, but that costs some money. Also, I’m thinking I need to take some time and write up some study sheets for myself instead of trying to commit everything to memory and hoping I remember it during the test.
Anyway, I thought I would share my failure with the community and hope it helps someone else prepare. The truth is that even though it is embarrassing to fail the test I learned things in the process that I can use at my job. It would be great to get the piece of paper and I hope to do so by the end of the year, but I’ve already learned a ton through what I’ve done so far.
We share our skills to maximize your revenue!
As a follow up to a previous video, Pythian CTO Alex Gorbachev gives an overview of the advanced security features within Hadoop.
We have a core group of about 8 of us, mostly DBA types but also a couple of developers. Every two weeks we get together to discuss/review a chapter from the latest version of Tom Kyte's Expert Oracle Database Architecture book.
The idea is that each person is responsible for reading the upcoming chapter themselves (which, for a book as good as Tom Kyte's, really means reading it multiple times). Someone volunteers or is drafted to be the "point person" who guides the discussion of the chapter, and those assignments rotate. At a chapter every two weeks, well geez, it does take a long time to get through a book, so that's a downside.
But think about it ... should you do something similar?
After we get through Tom's book the next one up will be Cary Millsap's "Optimizing Oracle Performance" ... ( I think ) ...
I was looking today for the list of these Diagnostic and Tuning pack components, to be sure not to use licensed views in databases that don’t have those packs. It was not an easy search, so I decided to share it with you; if you get lucky you’ll find this page and it will [...]
The Pythian team has received many questions about big data in the cloud, and specifically about Hadoop. Pythian CTO, Alex Gorbachev shares some of his recommendations in our latest video:
As per Oracle 12c documentation, a PDB can
- have its own local temporary tablespace, or
- if it does not have its own temporary tablespace, it can share the temporary tablespace with the CDB.
To demonstrate a PDB sharing the temporary tablespace of the CDB, the first step is to create a PDB without a temporary tablespace. By default, when a PDB is created from the seed, it is created with its local temporary tablespace TEMP, which cannot be dropped as it is the default temporary tablespace of the PDB.
So, the only options I could think of were:
Method – I
- Create a non-CDB without a temporary tablespace and then plug it into a CDB
Method – II
- Create a non-CDB / PDB with temporary tablespace,
- Generate its xml file using dbms_pdb
- Edit the xml file to remove the entry for temporary tablespace
- Plug in the non-CDB into a CDB
I will demonstrate the second method. For this purpose, I created a non-CDB orcl2 using DBCA so that it contained default temporary tablespace temp.
– Open the non-CDB in read only mode
ORCL2> shu immediate;
ORCL2> startup mount;
ORCL2> alter database open read only;
– Generate an XML metadata file for the non-CDB
ORCL2>exec dbms_pdb.describe ('/u01/app/oracle/oradata/orcl2/orcl2.xml');
– Edit the xml file to remove the entry for temp tablespace
[oracle@em12 ~]$ vi /u01/app/oracle/oradata/orcl2/orcl2.xml
– Use the xml file to plug in the non-CDB into CDB2 as PDB_ORCL2
CDB$ROOT@CDB2>create pluggable database pdb_orcl2 using '/u01/app/oracle/oradata/orcl2/orcl2.xml' nocopy;
– Connect to PDB_ORCL2 and run the “$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql” script to clean up the new PDB, removing any items that should not be present in a PDB.
CDB$ROOT@CDB2> connect sys/oracle@pdb_orcl2 as sysdba
PDB_ORCL2@CDB2> @?/rdbms/admin/noncdb_to_pdb
PDB_ORCL2@CDB2> alter pluggable database open;
– Check that data from non-CDB is available in the new PDB
PDB_ORCL2@CDB2> select count(empno) from scott.emp;

COUNT(EMPNO)
------------
          14
– Verify that temporary tablespace has not been created in PDB_ORCL2
PDB_ORCL2@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
USERS
EXAMPLE
– Verify that tempfile has not been created in PDB_ORCL2
PDB_ORCL2@CDB2> select name from v$tempfile;

no rows selected
So, I was able to create a PDB without temporary tablespace. Now I wanted to check if PDB_ORCL2 uses the temp tablespace of the CDB.
– First check that default temporary tablespace TEMP exists for the CDB
CDB$ROOT@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS

CDB$ROOT@CDB2> select PROPERTY_NAME, PROPERTY_VALUE from database_properties where upper(PROPERTY_NAME) like '%TEMP%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP
– Set pga_aggregate_target to its lowest allowed value of 10m to force a sort to spill to the temporary tablespace.
CDB$ROOT@CDB2> alter system set pga_aggregate_target=10m;
– Issue a query in PDB_ORCL2 which will spill to temporary tablespace
PDB_ORCL2@CDB2> select * from dba_objects order by 1,2,3,4,5,6,7;
select * from dba_objects order by 1,2,3,4,5,6,7
*
ERROR at line 1:
ORA-00959: tablespace 'TEMP' does not exist
I expected the query to use the temporary tablespace TEMP of the CDB, but it failed: the PDB expects a temp tablespace to exist locally, as its default temporary tablespace is set to TEMP (as it was in the non-CDB).
PDB_ORCL2@CDB2> col property_name for a30
PDB_ORCL2@CDB2> col property_value for a15
PDB_ORCL2@CDB2> l
  1  select PROPERTY_NAME, PROPERTY_VALUE from database_properties
  2* where upper(PROPERTY_NAME) like '%TEMP%'
PDB_ORCL2@CDB2> /

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP
– I tried to modify the default temporary tablespace to cdb$root:temp but that attempt also failed.
PDB_ORCL2@CDB2> alter database default temporary tablespace cdb$root:temp;
alter database default temporary tablespace cdb$root:temp
*
ERROR at line 1:
ORA-00933: SQL command not properly ended
Hence, I was able to create a PDB without a temporary tablespace, but I could not make it use the temporary tablespace of the CDB.
Back in September at Oracle OpenWorld 2013, Larry Ellison announced the Oracle Database In-Memory Option to Oracle 12c. Today, one of Pythian’s Advanced Technology Consultants, Christo Kutrovsky shares his thoughts on the new feature in our latest video. Stay tuned for updates while it’s being tested out in the field.
The beats of big data and symphonies of small data are creating a dazzling contrast in the realm of databases. Oracle, SQL Server, and MySQL are creating new tunes and all these tonal qualities are being captured by this Log Buffer Edition. Relax, sit back and enjoy.
This blog shows how you can write a SQL query to recommend products (cross-sell) to a customer based on products already placed in his current shopping cart.
R provides a rich set of statistical functions that we may want to use directly from SQL. Many of these results can be readily expressed as structured table data for use with other SQL tables, or for use by SQL-enabled applications, e.g., dashboards or other statistical tools.
Insight in the number, type and severity of errors that happen in a test or production environment is crucial to resolve them, and to make a stable ADF application that is less error-prone.
In order to get meaningful database statistics, the TIMED_STATISTICS parameter must be enabled for the database instance. The performance impact of having TIMED_STATISTICS enabled is minimal compared to instance performance.
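For reference, a minimal sketch of checking and enabling the parameter from SQL*Plus (on recent versions it defaults to TRUE when STATISTICS_LEVEL is TYPICAL):

```sql
-- Check the current setting
SHOW PARAMETER timed_statistics;

-- Enable it instance-wide, no restart needed
ALTER SYSTEM SET timed_statistics = TRUE;
```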
As you all know, ADS is basically a push technology which sends data from the server to the client without any user intervention. However, the client will periodically send requests to the server asking for updates, which is further configured in adf-config.xml.
SQL Server 2014 and the DBA: Building Bridges.
RS, SharePoint and Forefront UAG Series – Intro.
Introducing the Microsoft Analytics Platform System – the turnkey appliance for big data analytics.
Progressive Insurance data performance grows by a factor of four, fueling business growth and the online experience.
Version 9.04.0013 of the RML Utilities for x86 and x64 has been released to the download center.
Since the dawn of time, MySQL indexes have had a limit: they cannot be descending. Yes, ALTER TABLE and CREATE INDEX can contain the ASC and DESC keywords, but the index is built ascending regardless.
It’s now 3 weeks since the MariaDB & MySQL community day in Santa Clara.
FromDual.en: MySQL Environment MyEnv 1.0.3 has been released.
Managing Percona Xtradb Cluster with Puppet.
Every Relation is in First Normal Form, by definition. Every Table may not be.
Partner Webcast – Oracle Webcenter: Center of Engagement – Case Study: Information Strategy implementing Webcenter in Educational Institutions
Nationwide Deploys Database Applications 600% Faster
Heath Carfrey of Nationwide, a leading global insurance and financial services organization, discusses how Nationwide saves time and effort in database provisioning with Oracle Enterprise Manager.
- Provisioning Databases using Profiles (aka Gold Images)
- Automated Patching
- Config/Compliance tracking
A quick note on how to install EMCLI, which is used for various CLI operations against EM. I was looking to test some database provisioning automation via EMCLI and thus needed to set it up.
To set up EMCLI on the host, follow these steps:
1. Download the emcliadvancedkit.jar from the OMS using URL https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
2. Set your JAVA_HOME environment variable and ensure that it is part of your PATH. You must be running Java 1.6.0_43 or greater. For example:
o setenv JAVA_HOME /usr/local/packages/j2sdk
o setenv PATH $JAVA_HOME/bin:$PATH
3. You can install the EMCLI with the scripting option in any directory, either on the machine on which the OMS is running or on any machine on your network (download emcliadvancedkit.jar to that machine):
java -jar emcliadvancedkit.jar client -install_dir=<emcli client dir>
4. Run "emcli help sync" from the EMCLI home (the directory where you installed emcli) for instructions on how to use the "sync" verb to configure the client for a particular OMS.
5. Navigate to the Setup menu, then Command Line Interface. See the Enterprise Manager Command Line Tools Download page for details on setting up EMCLI.
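Putting steps 1-4 together, a sketch of the whole sequence in bash (the setenv lines above are csh syntax; host, port, and directories below are placeholders for your environment):

```shell
# 1. Download the advanced kit from the OMS (placeholder host and port)
curl -O -k https://omshost:7802/em/public_lib_download/emcli/kit/emcliadvancedkit.jar

# 2. Point JAVA_HOME at a Java 1.6.0_43 or greater JDK
export JAVA_HOME=/usr/local/packages/j2sdk
export PATH=$JAVA_HOME/bin:$PATH

# 3. Install the client with the scripting option into a chosen directory
java -jar emcliadvancedkit.jar client -install_dir=/u01/app/emcli

# 4. Configure the client against your OMS and sync the verbs
/u01/app/emcli/emcli setup -url=https://omshost:7802/em -username=sysman
/u01/app/emcli/emcli sync
```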