Feed aggregator

Gluent Demo Video Launch

Tanel Poder - Wed, 2016-03-30 13:58

Although we are still in stealth mode (kind of), due to the overwhelming number of requests for information we decided to publish a video about what we do :)

It’s a short 5-minute video; just click on the image below or go straight to http://gluent.com:

Gluent Demo video

And this, by the way, is just the beginning.

Gluent is getting close to 20 people now, with distributed teams in the US and UK – and we are still hiring!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

SXSWi Recap

Oracle AppsLab - Wed, 2016-03-30 09:05

Austin: a beautiful city with a river crossing downtown, a music scene, a young population, cycling, brisket, and the home of SXSW, a big multicultural conference with something for all tastes: Film, Interactive and Music.


This was my first time attending the conference, but Noel (@noelportugal) is a year-after-year attendee. It’s well known that this conference is not only a springboard for small companies and startups to show the world what they are cooking up, but also big exposure for new services, products, trends, you name it; that’s why we are very interested in this kind of conference, which is very aligned with our team’s spirit.

VR everywhere.

I mean it.

Since Google I/O 2014, I’ve been following the progress of VR and AR. At that time, Google released Google Cardboard, inexpensive goggles for viewing VR content, and Project Tango for AR. Yes, I know you can argue VR has been around for quite a long time, but I believe they exposed the right development tools and a cheap way to develop and consume the technology, so a lot of people got engaged. However, others remained very skeptical about use cases.


But now, after two years, guess what? VR is on everyone’s lips, and SXSW was no exception.


I have to say, I’m very impressed by how many companies have adopted this technology so quickly. Of course, we all saw this wave coming with announcements of products like Oculus Rift, HTC Vive, Noon VR, Microsoft HoloLens and so on, and as an emerging technology team we were already prepared to be hit by it.

I still can’t get used to seeing people with a headset over their eyes and headphones on, 100% isolated from reality. I tried most of the VR demos presented, and my brain/body is still not prepared for many VR experiences; I had a headache and felt weird after so many demos.

Also, I could see people with red marks all around their faces from wearing the headsets all day. Even so, seeing all those demos helped me conclude that pretty much all of them follow the same use case: advertising and promoting products.

It’s really interesting that retail and product companies are investing in this technology to attract more buyers and better convey how it feels to hold their product. This can be applied, for example, to automobiles, houses, travel agencies, etc. The funny thing is that this technology is sometimes combined with motion for a more complete experience.

Note: don’t ever try to take a selfie while wearing a VR headset; it’s almost impossible.

Change ADF BC Data Update Locking with FOR UPDATE WAIT

Andrejus Baranovski - Wed, 2016-03-30 08:43
Each time data is changed and updated through ADF BC, before posting changes to the DB, a SQL query with FOR UPDATE NOWAIT is generated and executed. If another process has locked the row to be updated, or another user is updating it at the same moment, an error is raised and the update is stopped. There are use cases where you would like to wait for a certain period of time until the row is unlocked, and then commit the row changes. This is especially true if a 3rd party (PL/SQL) process is updating rows and you have defined a change indicator attribute in ADF BC (see my previous post - ADF BC Version Number and Change Indicator to Track Changed Rows).

We can change the default behavior: instead of requesting an immediate lock, we can wait for a period of time. If the lock becomes available during this period, the session acquires it; if the row remains locked, an error is returned. Instead of the default FOR UPDATE NOWAIT, we can generate FOR UPDATE WAIT (time period in seconds).

To override the default behavior, we need to specify a custom SQLBuilder class. This can be registered in the Application Module configuration through the jbo.SQLBuilder property:


The class must extend OracleSQLBuilderImpl and override the getSqlVariantLockTrailer() method. My sample application is implemented to return FOR UPDATE WAIT 30:
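The screenshots from the original post don't come through in this feed, but the builder class itself is only a few lines. Here is a minimal sketch (the class and package names are made up, and the exact method signature should be double-checked against your ADF version):

package model.extension; // hypothetical package name

import oracle.jbo.server.OracleSQLBuilderImpl;

// Custom SQL builder: instead of the default FOR UPDATE NOWAIT lock clause,
// ask the database to wait up to 30 seconds for the row lock.
public class CustomSQLBuilderImpl extends OracleSQLBuilderImpl {

    @Override
    public String getSqlVariantLockTrailer() {
        return "FOR UPDATE WAIT 30";
    }
}

Registering it typically means setting jbo.SQLBuilder to Custom and pointing the companion jbo.SQLBuilderClass property at the class above; verify the exact property names in your JDeveloper release.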


We can do a test by simulating a PL/SQL lock: execute a SQL query with FOR UPDATE from SQL Developer:


Try to update the same row from ADF BC with the default behavior, and an error is returned - "Failed to lock the record, another user holds the lock". This is expected, because the row remains locked:


With the overridden SQL Builder, FOR UPDATE WAIT 30 is generated. It waits up to 30 seconds; as soon as the lock is released, the lock from the current session is set and the row data is updated:


Download sample application - LockUpdateApp.zip.

Logstash and Oracle Database

Kris Rice - Wed, 2016-03-30 07:48
For anyone that doesn't know what Logstash is, head over to http://logstash.net/ and watch the video on the home page. Robin Moffatt has done a great blog post on the full ELK stack with OBIEE. This blog post is a first cut at an input filter for Logstash to gather metrics, logs, or anything that can be expressed in SQL. A huge caution that this is a 0.01 attempt and will get better as
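As a rough illustration of the idea only (this is not the filter described in the post), the community logstash-input-jdbc plugin shows what "anything that can be expressed in SQL" looks like as a Logstash input; the driver path, connection string, credentials and query below are placeholders:

# Hypothetical pipeline using the logstash-input-jdbc plugin; adjust for your environment.
input {
  jdbc {
    jdbc_driver_library    => "/path/to/ojdbc7.jar"
    jdbc_driver_class      => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@dbhost:1521/orcl"
    jdbc_user              => "metrics_user"
    jdbc_password          => "secret"
    statement              => "SELECT metric_name, value, begin_time FROM v$sysmetric"
    schedule               => "* * * * *"   # run once a minute
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}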

Auto DDL: delete obsolete columns from table

Darwin IT - Wed, 2016-03-30 06:55
A quick one. In the past I used to generate DDL based on queries like the following, but I find myself re-inventing them again and again. So, to have them saved for my offspring, here's one that drops obsolete columns as generated when importing an Excel sheet in SQL Developer:


declare
  l_schema_name varchar2(30) := 'MY_SCHEMA';
  l_table_name  varchar2(30) := 'A_TABLE';
  cursor c_cols is
    select column_name
    from   all_tab_columns col
    where  col.table_name = l_table_name
    and    col.owner = l_schema_name
    and    col.column_name like 'COLUMN%';
begin
  for r_cols in c_cols loop
    execute immediate 'alter table '||l_schema_name||'.'||l_table_name||' drop column '||r_cols.column_name;
  end loop;
end;
/

And here's one to generate a check constraint on all indicator ('IND_%') columns of a table:

declare
  l_schema_name         varchar2(30) := 'MY_SCHEMA';
  l_table_name          varchar2(30) := 'A_TABLE';
  l_constraint_name_pfx varchar2(30) := 'XXX_ALIAS_CHK';
  l_idx                 pls_integer  := 1;
  cursor c_cols is
    select column_name
    from   all_tab_columns col
    where  col.table_name = l_table_name
    and    col.owner = l_schema_name
    and    col.column_name like 'IND_%';
begin
  for r_col in c_cols loop
    execute immediate 'ALTER TABLE '||l_schema_name||'.'||l_table_name||' ADD CONSTRAINT '||l_constraint_name_pfx||l_idx||' CHECK ('||r_col.column_name||' in (''J'',''N'')) ENABLE';
    l_idx := l_idx + 1;
  end loop;
end;
/

New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design

Rittman Mead Consulting - Wed, 2016-03-30 03:09

I’m pleased to have recently had my first article published on the Oracle Technology Network (OTN). You can read it in its full splendour and glory(!) over there, but I thought I’d give a bit of background to it and the tools demonstrated within.

OBIEE Performance Analytics Dashboards

One of the things that we frequently help our clients with is reviewing and optimising the performance of their OBIEE systems. As part of this we’ve built up a wealth of experience in the kind of suboptimal design patterns that can cause performance issues, as well as how to go about identifying them empirically. Getting a full stack view on OBIEE performance behaviour is key to demonstrating where an issue lies, prior to being able to resolve it and proving it fixed, and for this we use the Rittman Mead OBIEE Performance Analytics Dashboards.

OBIEE Performance Analytics

A common performance issue that we see is analyses and/or RPDs built in such a way that the BI Server inadvertently returns many gigabytes of data from the database and in doing so often has to dump out to disk whilst processing it. This can create large NQS_tmp files, impacting the disk space available (sometimes critically), and the disk I/O subsystem. This is the basis of the OTN article that I wrote, and you can read the full article on OTN to find out more about how this can be a problem and how to go about resolving it.

OBIEE implementations that cause heavy use of temporary files on disk by the BI Server can result in performance problems. Until recently in OBIEE, this was really difficult to track because of the transitory nature of the files. By the time the problem had been observed (for example, disk full messages), the query responsible had moved on and the temporary files had been deleted. At Rittman Mead we have developed lightweight diagnostic tools that collect, amongst other things, the amount of temporary disk space used by each of the OBIEE components.
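(Our dashboards collect this automatically, but a crude shell loop like the one below, which is not part of our tooling, gives an idea of the measurement involved; the path is only an example for an 11g instance and varies between installations.)

# Sample the BI Server temp directory size once a minute (example path only).
OBIS_TMP=/u01/app/oracle/biee/instances/instance1/tmp/OracleBIServerComponent/coreapplication_obis1
while true; do
  echo "$(date '+%Y-%m-%d %H:%M:%S')  $(du -sh "$OBIS_TMP" 2>/dev/null | cut -f1)"
  sleep 60
done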

pad_tmp_disk

This can then be displayed as part of our Performance Analytics Dashboards, and analysed alongside other performance data on the system such as which queries were running, disk I/O rates, and more:

OBIEE Temp Disk Usage

Because the Performance Analytics Dashboards are built in a modular fashion, it is easy to customise them to suit specific analysis requirements. In this next example you can see performance data from Oracle being analysed by OBIEE dashboard page in order to identify the cause of poorly-performing reports:

OBIEE Database Performance Analysis

We’ve put online a set of videos here demonstrating the Performance Analytics Dashboards, and explaining in each case how they can help you quickly and accurately diagnose OBIEE performance problems.

You can read more about our Performance Analytics offering here, or get in touch to find out more!

The post New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Machine Intelligence for Enterprise IT

Gerger Consulting - Wed, 2016-03-30 01:59
The need to gain actionable insights from ever-increasing data sets is a huge problem for BI professionals. Attend our webinar presented by Numenta, a machine intelligence company founded by Jeff Hawkins (the founder of Palm Inc.), and learn how machine intelligence can help you solve real world enterprise IT problems.

Watch the short video about Numenta's story and the importance of their approach to machine learning:



If you want to learn how to use machine intelligence for analytics, this is the webinar to attend.






Categories: Development

An Important Change Coming for Oracle Application Express in Oracle Database 12cR2

Joel Kallman - Tue, 2016-03-29 21:20
A minor but important change is happening for Oracle Application Express in the forthcoming Oracle Database 12cR2.  Specifically, Oracle Application Express will not be installed by default in the Oracle Database.  This change was made specifically at our request.  We thought the pros far outweighed the cons, and we thought this was good for our customers and consistent with our recommendations.

Pros
  1. Provides flexibility for a DBA to run multiple APEX versions in an Oracle Multitenant Container Database.
  2. Customers are always advised to install the latest version of APEX.  The version of APEX that is bundled with the Oracle Database quickly becomes out of date.
  3. Reduces the Oracle Database upgrade time if APEX is not installed.
  4. Will result in less space consumption on disk and in the Database.
  5. Consistent with the deployment of Application Express in the Oracle Database Cloud.
  6. Consistent with our recommendations in Oracle documentation and from Mike Dietrich, Product Manager for Oracle Database Upgrade & Migrations.
  7. Reduces the attack surface for Oracle Multitenant Container Databases which do not require APEX.
Cons
  1. Requires more steps for customers to get up and running with APEX in a new Oracle Database.  (Granted, this would be with the version of APEX that is bundled with the Oracle Database, which, as cited earlier, can get out of date rather quickly; the manual installation itself is only a couple of commands, as sketched below.)
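For context, a rough sketch of the standard full installation from the bundled tree is shown here (default tablespaces and the /i/ image prefix; always follow the Installation Guide for the APEX version you are actually installing):

# Sketch of a standard full APEX installation from the bundled directory
cd $ORACLE_HOME/apex
sqlplus / as sysdba
SQL> @apexins.sql SYSAUX SYSAUX TEMP /i/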

With all this said, a few things remain unchanged:
  • Oracle Application Express will continue to be a fully supported and included Oracle Database feature
  • Oracle Application Express will continue to ship with the Oracle Database 12cR2, in directory $ORACLE_HOME/apex.
  • It will continue to be supported to install and run Oracle Application Express in the "root" of an Oracle Multitenant Container Database
  • It will continue to be supported to install and run Oracle Application Express locally in a pluggable database in an Oracle Multitenant Container Database
  • Oracle Application Express will be an installable component in the Oracle Database Creation Assistant (DBCA)
The only thing that's changing is the out-of-the-box configuration of APEX, to not be installed by default.


Oracle Coherence Use in the product

Anthony Shorten - Tue, 2016-03-29 18:52

One of the most common questions I get from people is about the use of Oracle Coherence in our product.

We bundle a subset of the Oracle Coherence libraries for use in our Batch Architecture. The Coherence libraries permit our threadpools to be clustered and to communicate (via Coherence) with each other in an efficient manner. This includes our submitters (the threads that are submitted) as well as the threadpools.

We introduced Oracle Coherence into our batch architecture in previous releases to manage the batch cluster, and there are a few things that need to be clarified about the support:

  • We bundle a set of Coherence libraries that are used by the product. The libraries are only a subset of the full Coherence stack. They represent a Restricted Use License (RUL) for the provided use (i.e. managing the batch cluster).  The libraries are listed in the ouaf_jar_versions.txt file in the etc directory of the product installation. You do not need to purchase the Oracle WebLogic with Coherence to use the libraries for their licensed purpose.
  • As part of the Restricted Use License you cannot use the libraries in customizations so you cannot extend past the provided use. If you want to extend the use of Coherence in your custom solutions then you will need to purchase a FULL additional license for Oracle Coherence.
  • As the libraries are a subset of what is available in Oracle Coherence, it is NOT recommended to use the Oracle Coherence pack for Oracle Enterprise Manager with our products. This is because the pack assumes you are using the full stack and can return erroneous information when attempting to use it with the batch cluster.

Essentially, we bundle a subset of Coherence libraries we use internally for our clustered batch architecture. These are locked down to use for that clustering purpose only. You do not need to extend the license to use them for this purpose. If you want to use them beyond this purpose, you can purchase a full license.

Python DBA Graphs Github Repository

Bobby Durrett's DBA Blog - Tue, 2016-03-29 17:40

I decided to get rid of the Github repository that I had experimented with and to create a new one. The old one had a dump of all my SQL scripts but without any documentation. But I have been updating my Python graphing scripts a bit at a time and have gotten some recent value from these scripts in my Oracle database tuning work. So, I created a Github repository called PythonDBAGraphs. I think it will be more valuable to have a repository that is more focused and is being actively updated and documented.

It is still very simple, but I have gotten real value from the two graphs that are included.
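To give a flavor of what such a script can look like (an illustrative sketch only, not code from PythonDBAGraphs; the connection details and the metric queried are placeholders):

# Hypothetical sketch: pull an Oracle metric with cx_Oracle and plot it with matplotlib.
import cx_Oracle
import matplotlib.pyplot as plt

conn = cx_Oracle.connect("system", "secret", "dbhost:1521/orcl")  # placeholder credentials
cur = conn.cursor()
cur.execute("""
    select begin_time, value
    from v$sysmetric_history
    where metric_name = 'Host CPU Utilization (%)'
    order by begin_time""")
times, values = zip(*cur.fetchall())

plt.plot(times, values)
plt.title("Host CPU Utilization (%)")
plt.xlabel("Time")
plt.ylabel("Percent")
plt.show()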

Bobby

Categories: DBA Blogs

Collaborate 2016

Jim Marion - Tue, 2016-03-29 17:28

I just wanted to post a quick note with my schedule for Collaborate 2016. I will be presenting PeopleSoft Developer Tips and Techniques on April 13th at 9:15 AM in Reef C. My publisher assured me that I would have a box of books waiting for me at the conference, and we plan to give them away at various sessions. Here are a couple of other sessions I recommend attending at Collaborate:

There is one session I want to specifically highlight: A Designer's Intro to the Oracle JET Framework. PeopleTools 8.55 includes Oracle JET, Oracle's brand new open source JavaScript Extension Toolkit. You may never directly interact with Oracle JET, but it is always good to understand the tools and frameworks used by PeopleTools. Oracle JET is based on common JavaScript libraries such as RequireJS and Knockout, and PeopleTools includes these related open source libraries (note: I have written about using RequireJS with PeopleSoft in prior posts).

10 Cloverfield Lane

Tim Hall - Tue, 2016-03-29 17:21

I just got back from watching 10 Cloverfield Lane. The trailers don’t really give much away, but don’t watch them anyway, just in case. Also, don’t read any reviews with spoilers before seeing it. I think it will still work if you know the story, but why ruin it?

Wow! What a movie! Wow!

The story is great. The acting is great. John Goodman is great. I didn’t really know what the story was before going in. I had just seen the trailer embedded above.

I’m so glad I went to see this film! I don’t want to say anything more for fear of ruining it for someone…

Cheers

Tim…


Why Now is the Time for the Oracle SCM Cloud

Linda Fishman Hoyle - Tue, 2016-03-29 16:03

A Guest Post by SVP Rick Jewell, Oracle Applications Development (pictured left)

Cloud is the solution of choice in CRM, HCM, and most recently, ERP. But what about supply chain? We know that more than 80 percent of all companies either are using, or plan to use, cloud applications for SCM―that percentage is even higher for supply chain leaders. And those percentage shares increase with every market survey.

We also know that innumerable positive cloud experiences have primed the pump for a broad-based transition of SCM to cloud. Those experiences have come from other functional areas, as well as from SCM cloud-leading areas such as Transportation Management.

But what has been missing are new, enterprise-class solutions for SCM built for the cloud. Now all that has changed with Release 11, and our timing could not be better.

Broader, Better, Faster

Yet is it sufficient to take the same SCM solutions that were designed and developed decades ago, for a world that no longer exists, and just deploy them in the cloud? A lot of our competitors seem to think so, but we did not. Here’s why.

Modern supply chains have transformed dramatically, and continue to transform as the nature of markets, of products and services, and of trading party interactions constantly evolves. The rigidity, complexities, and integration challenges of a typical legacy SCM deployment make them a business inhibitor, not a business enabler.

To address these challenges we chose the hard road. That road included an all-new set of SCM capabilities, built from the ground up with architectural flexibility, a rich yet easy-to-use user experience, multi-enterprise support, and broad, cross-functional business flows. We built supply chain solutions for 21st-century supply chains, all designed for and delivered in the cloud. And that value proposition only gets better when you consider all of the other leading cloud solutions from Oracle.

Transition to SCM Cloud

We also offer flexible paths to Oracle SCM Cloud. Every business has its own combination of business pressures and ability to adopt change; therefore, we designed the Oracle SCM Cloud to provide multiple ways to transition to the cloud. We encourage all of our customers and prospects to engage with Oracle to develop their own roadmaps―whether it’s innovation at the edge, shared-service deployment of a business function, wholesale replacement of a division, or incremental enterprise deployment. The options are there, supported by integrations and by implementation expertise.

For more information, go to cloud.oracle.com/scm.

7 Bonus Blog Posts to Usher in Oracle HCM World

Linda Fishman Hoyle - Tue, 2016-03-29 15:44
There will be legions of HCM professionals descending on Chicago, IL, next week for Oracle HCM World. They will come to hear from industry influencers, share best practices, and learn how to modernize their business practices. Transformation and digital disruption will be hot topics.

Here are seven articles published in the last few weeks to promote the content at the conference. They are meaty in and of themselves, so scan the titles and dig in.

Why Your People Are Leaving and Your Recruiters Are Busy: Research shows that people are leaving their jobs, not for more money, but for the opportunity to grow and advance. Therefore, it’s time to energize the activities around learning and development. This article, by Oracle’s Sunita Khatri, is a precursor to similar content in the How to Grow Your People track at the conference.

Are You Following These 3 Laws of Attraction to Get the People You Want?: Matt Charney of Recruiting Daily states that despite all the new and emerging techniques to find potential employees, businesses still struggle to maximize their acquisition efforts. As a forerunner to his session in the Finding People track at HCM World, he writes about three laws of attraction to find the right people.

HR Analytics Pitfalls to Avoid: There are so many advantages of business intelligence and data-based decision-making, but tapping into the right data sources and reporting that intelligence back to key decision-makers can pose challenges. This article is written by Oracle’s Lauren McKay. There is an entire track dedicated to workforce analytics at the conference.

No More Silos: How HR Can Take Its Seat at the Table: Matt Charney, executive director of Recruiting Daily, believes HR has a big branding problem. He suggests that HR professionals approach their processes, policies, and platforms with a marketing mentality to boost their department’s brand and become a business partner for the business. Charney will present a session entitled “Marketing for HR” at the conference.

From Hindrance to Hero: How HR Can Reclaim Its Rightful Role: Peter Cappelli is a management professor at the Wharton School and a keynote speaker at Oracle HCM World. In this article, Cappelli offers five actions that HR professionals can take to sustain their role as a strategic partner.

3 Trends That May Change the Way You Pay and Reward People: In this article, Oracle’s Chad Atwell offers one recommendation—stay flexible—to those managing total rewards in an organization. There will be a Paying People in 2016 and Beyond track at the conference. Atwell will present with customers Ansell and Compass Minerals on how to best implement and utilize compensation technology.

How to Approach Your HR Transformation: Stacey Harris, Vice President of Research and Analytics for Sierra-Cedar, starts the conversation about choosing one of many paths for a successful HR transformation based on a company’s specific needs. Harris is a speaker at the conference.

EBS 12.1.3 Data Masking Template Certified with EM 12.1.0.5

Steven Chan - Tue, 2016-03-29 15:37

(Contributing author:  Nirzari Raichura)

We're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12.1.0.5.

You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Cloud Control 12c to mask sensitive data in cloned E-Business Suite environments.  Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that the application continues to function.

How data masking works

You may scramble data in E-Business Suite cloned environments with EM12c using the following template:

What does data masking do in E-Business Suite environments?

Application data masking does the following:

  • De-identify the data:  Scramble identifiers of individuals, also known as personally identifiable information or PII.  Examples include information such as name, account, address, location, and driver's license number.
  • Mask sensitive data:  Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns.  Examples include compensation, health and employment information.  
  • Maintain data validity:  Provide a fully functional application.


Categories: APPS Blogs

Oracle WebCenter Content for Primavera

WebCenter Team - Tue, 2016-03-29 15:10

Author: Mitchell Palski – Oracle Fusion Middleware Specialist 

Having a content repository integrated with Unifier can turn cluttered, unstructured content into organized assets by making it easier to catalog, access, search, and reuse documentation. Oracle WebCenter can serve as an external content repository and has zero effect on the end-user’s experience interacting with Primavera.

When attaching a document or drawing to a project in Unifier, the interface does not present anything about the underlying content repository which stores the file.


With WebCenter Content (WCC) being used as a content repository for Unifier, the same mechanical HVAC drawing attached to the drawing package is stored in WCC as indicated below.

Within WCC, Unifier creates a unique folder structure to categorize and store documents. The actual file name is an encrypted file name generated when the file is stored in the repository.

Those documents can now be:

  • Indexed for search
  • Entered into external workflows
  • Organized into categories for retention and disposition
  • Accessed via the WebCenter web browser UI, desktop tools, or mobile app
  • Integrated into external applications and tools via native APIs and web services 
The best part? WebCenter is easy to install and configure with Primavera!
  1. Install WebCenter
  2. Create six custom metadata fields for data that comes from Primavera
  3. Enter WebCenter’s connection information on the Primavera “Repository” tab
If you want enterprise-grade document management and retention to supplement your instance of Primavera, start with Oracle.

Forgot EBS 12.2 : “Invalid credentials passed” issue while starting Admin Server

Online Apps DBA - Tue, 2016-03-29 11:43

The architecture in Oracle EBS R12.2 changed and Oracle introduced WebLogic Server. This was covered by Atul in his previous post on changes in EBS 12.2. WebLogic Server contains an Admin Server in a Domain, and you use the script adadminsrvctl to start the Admin Server in the Oracle EBS Domain. You need a username/password to start/boot the Admin Server and this […]

The post Forgot EBS 12.2 : “Invalid credentials passed” issue while starting Admin Server appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

In Depth: MySQL 5.6+ DDL

Pythian Group - Tue, 2016-03-29 10:07
Overview

DDL (Data Definition Language) statements create, alter, and remove database objects. These types of changes can be very dangerous to make on such a critical piece of your infrastructure. You want to make sure that the command you are executing has been given proper thought and testing.

In this post I go through multiple versions of MySQL and verify the best course of action to take with regard to executing DDL statements. There are many things that you have to consider when making these types of changes, such as disk space, load on the database server, slave replication, the type of DDL statement you are executing, and whether it will lock the table.

Because of these risks, there are tools that can be used to help mitigate some of the dangers. But unless you have tested and verified their functionality, these tools can themselves cause trouble. Whenever in doubt, take the time to test and verify any changes that you will make.

In my testing I will be using pt-online-schema-change in particular, since it is a very popular tool and I have used it many times; it was also originally created before MySQL offered online DDL changes. In some cases, depending on your environment, the best course of action may be removing the database server from being accessed, by failing over to a slave or taking a cluster node offline.

I will be focusing on the most common DDL statements, as I want to keep this post to a reasonable size. Many MySQL DDL statements now use the INPLACE algorithm by default where they are able to, which is only available in MySQL 5.6 or later. Earlier versions (5.5, and 5.1 with the InnoDB plugin) had fast index creation, but all other table alters were blocking. Online DDL with the INPLACE algorithm allows MySQL to perform the change, in many cases rebuilding the table in the background, while the table remains available for reads and writes. Some DDL statements can even be done instantaneously, such as dropping an index or renaming a column.

When MySQL isn't able to use the INPLACE algorithm it has to fall back to the COPY algorithm, which will in turn lock the table. An example of this is changing a column definition from VARCHAR to BLOB. Whenever you are doing an INPLACE alter you will want to specify the algorithm in your command. This helps protect you in the case that MySQL is unable to do an INPLACE alter: MySQL will return an error rather than running the command with the COPY algorithm.


ALTER TABLE employee_test ALGORITHM=INPLACE, CHANGE COLUMN first_name first_name BLOB NULL;
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.

All of my testing was done without specifying the algorithm, allowing MySQL to determine the best algorithm to use. If there are any DDL statements that you want more information on, please refer to the documentation for the release of MySQL that you are using; I will not be going into foreign keys here.

The Setup

All of my testing was done in virtual machines (VMs) on my laptop. I have a VM that runs mysqlslap to perform remote DML statements such as SELECT, UPDATE, DELETE and INSERT, causing load on the database server. This allows me to see any potential table locks or performance impact. Here is the setup of the MySQL machine and its components. I created the table shown below and imported 10 million rows. While mysqlslap was running, I performed each of the DDL statements and verified that the DML statements were being executed without table locks. I then recorded the time as they completed.

MySQL Server Stats
  • CPU: 4 CPUs at 2.6 GHz (Intel Core i7)
  • Memory allocated to VM: 2 GB
  • Memory allocated to MySQL InnoDB buffer pool: 1 GB
  • Flash storage
  • Table has 10 million rows
  • DML (Data Manipulation Language) statements such as SELECT, INSERT, UPDATE, and DELETE are executed against the table during the DDL statements
Table Structure
CREATE TABLE `employee_test` (
`emp_no` int(11) NOT NULL AUTO_INCREMENT,
`birth_date` date NOT NULL,
`first_name` varchar(14) NOT NULL,
`last_name` varchar(16) NOT NULL,
`gender` enum('M','F') NOT NULL,
`hire_date` date NOT NULL,
PRIMARY KEY (`emp_no`),
KEY `ix_lastname` (`last_name`),
KEY `ix_firstname` (`first_name`)
) ENGINE=InnoDB AUTO_INCREMENT=10968502 DEFAULT CHARSET=latin1
MySQL DDL Commands
CREATE INDEX ix_hire_date ON employee_test (hire_date); --CREATE INDEX
CREATE FULLTEXT INDEX ix_lastname_fulltext ON employee_test(last_name); --CREATE FULLTEXT INDEX
DROP INDEX ix_hire_date ON employee_test; --DROP INDEX
OPTIMIZE TABLE employee_test; --OPTIMIZE TABLE
ALTER TABLE employee_test ADD COLUMN test_column INT NULL; --ADD COLUMN
ALTER TABLE employee_test DROP COLUMN f_name; --DROP COLUMN
ALTER TABLE employee_test CHANGE first_name f_name varchar(14) NOT NULL; --RENAME COLUMN
ALTER TABLE employee_test MODIFY COLUMN emp_no BIGINT AUTO_INCREMENT NOT NULL; --CHANGE COLUMN TYPE
pt-online-schema-change DDL Commands
pt-online-schema-change --execute --alter 'ADD FULLTEXT INDEX ix_lastname_fulltext (last_name)' D=employees,t=employee_test
pt-online-schema-change --execute --alter 'ENGINE=INNODB' D=employees,t=employee_test
pt-online-schema-change --execute --alter 'ADD COLUMN test_column3 INT NULL' D=employees,t=employee_test
pt-online-schema-change --execute --alter 'MODIFY COLUMN gender BLOB NULL' D=employees,t=employee_test
Results

This matrix is a representation of the testing that I performed and how quickly the commands executed. Be careful with FULLTEXT indexes on your tables, since the first one can cause additional locking by creating the necessary infrastructure in the background; please see MySQL InnoDB FULLTEXT Indexes for more details. This requirement causes a great deal of locking on the table.

DDL Matrix

pt-online-schema-change

For the DDL statements that cause locking of the table, we wanted to look at incorporating pt-online-schema-change to help us overcome this obstacle.

pt-online-schema-change results

pt-online-schema-change allowed us to perform, without locking, the operations that previously locked the table. pt-online-schema-change also has many other features, such as helping limit the impact on slave replication and handling foreign keys. But it also has its limitations, such as not being able to run on a table that already has triggers, and complications with foreign keys. There are also impacts on your environment if it is not properly tested and verified. For example, every time that I ran pt-online-schema-change in my test it caused a deadlock, causing mysqlslap to die and not perform any further statements.

mysqlslap: Cannot run query UPDATE employee_test SET first_name = ‘BigPurpleDog’ WHERE last_name = ‘SmallGreenCat’; ERROR : Deadlock found when trying to get lock; try restarting transaction

This is why it is very important to try to determine the impact, if any, that pt-online-schema-change may have on your environment before starting to use it. I did not encounter this behavior with any of the MySQL DDL statements that I ran.

Performance Impact

While performing the changes there were consistent increases in CPU load, disk I/O, and disk usage as new tables were being created for the table alters. We have to remember that when certain DDL statements are being executed, a full copy of the table is being made, so you will want to make sure you have enough disk space to complete the change. This is why it is very important to take into consideration the size of the table you are altering and the load on the MySQL server while performing DDL statements. It is preferred that you run any of the DDL statements that cause table copies off hours, so as to avoid any delays or outages for the application that is using the data.

Query Execution Impact

Query Execution Baseline

Server Performance Impact

MySQL Alter Load

Conclusion

As I have observed in performing these tests, there are many things to consider when performing DDL statements to avoid potential pitfalls. Here is a summary of the recommendations for executing DDL statements or using pt-online-schema-change. Before considering any of this, determine whether the statement you are going to perform will copy the table, and if it will, make sure you have enough disk space.

Without Fulltext
With Fulltext

If you are going to make changes to your production servers, make sure that you run your DDL statements during off hours when the server is at its lowest utilization for both CPU and disk.

As an added safety measure, when you are performing any of the MySQL DDL statements that you expect to execute INPLACE without locking the table, make sure you specify ALGORITHM=INPLACE in your statement. If MySQL is unable to execute the command in place, it will just return an error instead of executing the statement with the COPY algorithm, which would lock the table. Here are samples of the DDL statements that you should be able to run INPLACE without causing any locking of your table.

ALTER TABLE employee_test ALGORITHM=INPLACE, ADD INDEX ix_hire_date (hire_date); --CREATE INDEX
ALTER TABLE employee_test ALGORITHM=INPLACE, DROP INDEX ix_firstname; --DROP INDEX
ALTER TABLE employee_test ALGORITHM=INPLACE, ENGINE=INNODB; --OPTIMIZE TABLE
ALTER TABLE employee_test ALGORITHM=INPLACE, ADD COLUMN test_column INT NULL; --ADD COLUMN
ALTER TABLE employee_test ALGORITHM=INPLACE, DROP COLUMN f_name; --DROP COLUMN
ALTER TABLE employee_test ALGORITHM=INPLACE, CHANGE first_name f_name varchar(14) NOT NULL; --RENAME COLUMN

Categories: DBA Blogs

DataStax OpsCenter upgrade (4.1 to 5.1) for Cassandra – issue and resolution

Pythian Group - Tue, 2016-03-29 09:42

For the Apache Cassandra cluster (version C* 1.2.16) that I’ve supported, the monitoring of the cluster is done through DataStax OpsCenter, version 4.1.2. As part of the effort to improve the monitoring capability for this cluster, my team decided first to upgrade OpsCenter to version 5.1.4, the latest available version of OpsCenter that is compatible with Cassandra 1.2.16. The same OpsCenter is also used to monitor another cluster of DataStax Enterprise (DSE) 4.5.2 (which corresponds to Apache Cassandra version 2.0.10).

During the upgrade we ran into an issue, and we couldn’t find a similar problem documented anywhere on Google. We’d like to use this post to document the problem we faced, as well as the solution and findings from the upgrade.

 

Problem Overview

The OpsCenter upgrade procedure is as straightforward as described in the DataStax OpsCenter documentation. After the OpsCenter upgrade, the OpsCenter web portal detects the mismatched version of the datastax-agents on all nodes. Choose the “FixMe” option from the portal to upgrade the datastax-agents to version 5.1.4 on all monitored nodes. After the datastax-agent upgrade, we addressed some datastax-agent configuration issues in the “address.yaml” file to reflect the changes brought by the new version of OpsCenter/datastax-agent.

After all this was done, we double-checked the log files for OpsCenter and the datastax-agents. The OpsCenter log file was mostly clear and the datastax-agent log for the DSE 4.5.2 cluster was also clear, but the datastax-agent log for Cassandra 1.2.16 was NOT. The corresponding OpsCenter web portal was not able to display Cassandra metrics for the C* 1.2.16 cluster.

In each of the datastax-agent log files in the C* 1.2.16 cluster, we saw a lot of repeating errors like the ones below:

          ERROR [async-dispatch-3] 2016-02-19 12:57:52,750 There was an error when attempting to load stored rollups.
          com.datastax.driver.core.exceptions.InvalidQueryException: Undefined name key in where clause (‘key EQ ‘<… …>”)
          at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
          at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:291)
          at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:205)
          at clojurewerkz.cassaforte.client$execute.invoke(client.clj:289)
          … …
          ERROR [cassandra-processor-1] 2016-02-19 13:00:02,295 Error when proccessing cassandra call: com.datastax.driver.core.exceptions.InvalidQueryException: Unknown identifier key

 

Problem Analysis and Solution

The fact that the error showed up in the datastax-agent log files gave me a hint that it might be related to the datastax-agent failing to write collected metrics into the OpsCenter tables. So, as the first step of the analysis, I compared the schema of the “OpsCenter” keyspace between the two monitored clusters. Below is a comparison of two OpsCenter table definitions between the two clusters.

C* 1.2.16 Cluster:

CREATE TABLE events (
  "KEY" blob,
  column1 blob,
  value blob,
  PRIMARY KEY ("KEY", column1)
);

CREATE TABLE events_timeline (
  "KEY" blob,
  column1 bigint,
  value blob,
  PRIMARY KEY ("KEY", column1)
);

DSE 4.5.2 Cluster:

CREATE TABLE events (
  key text,
  action bigint,
  level bigint,
  success boolean,
  time bigint,
  PRIMARY KEY ((key))
);

CREATE TABLE events_timeline (
  key text,
  column1 bigint,
  value blob,
  PRIMARY KEY ((key), column1)
);

 

From this comparison, we can clearly see that the upgrade of OpsCenter and the datastax-agents to version 5.1.4 somehow did not migrate the OpsCenter schema properly for the C* 1.2.16 cluster. The theory for the error is that the upgraded datastax-agent in the C* 1.2.16 cluster is trying to query or update Cassandra metrics in the OpsCenter tables in a fashion that matches the OpsCenter schema of the DSE 4.5.2 cluster. But the actual OpsCenter schema in C* 1.2.16 still has the old definition, thus causing the invalid query exceptions seen in the log file.

Once the problem is clear, the solution is straightforward. The steps are summarized below:

In the C* 1.2.16 cluster:

  1. Take a snapshot of the OpsCenter keyspace on all nodes.
  2. Stop the DataStax agents on all nodes, so they won’t try to write metrics into the OpsCenter tables.
  3. Use CQL to drop the OpsCenter tables and re-create them, matching the OpsCenter schema of the DSE 4.5.2 cluster (a sketch for the two tables above follows this list). Make sure that all table properties are the same.
  4. Once the OpsCenter schema is recreated, start the DataStax agents on all nodes.
  5. Verify in the agent log files that the error messages are gone.
  6. Restart the OpsCenter service.
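For the two tables in the comparison above, the recreation looks roughly like this (a sketch only; remember to also copy the real table properties, such as compaction settings, from the working cluster):

-- Sketch: recreate the OpsCenter tables to match the working (DSE) cluster's schema.
USE "OpsCenter";

DROP TABLE events;
CREATE TABLE events (
  key text,
  action bigint,
  level bigint,
  success boolean,
  time bigint,
  PRIMARY KEY ((key))
);

DROP TABLE events_timeline;
CREATE TABLE events_timeline (
  key text,
  column1 bigint,
  value blob,
  PRIMARY KEY ((key), column1)
);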

 

After these steps, we double-checked the log files for all the datastax-agents and for OpsCenter, and confirmed that there were no errors. The OpsCenter web portal was also able to display the Cassandra metrics properly.

Categories: DBA Blogs

The NoSQL-Relational Comparison Series Continues: Relational SQL vs MongoDB Methods

Chris Foot - Tue, 2016-03-29 09:01

When we compare relational and NoSQL systems, one of the critical analyses we have to perform is of their data access mechanisms.  As we’ll learn over the next few articles of this series on data access, the SQL language used by relational database management systems is much more feature-rich and powerful than its NoSQL counterpart.  This statement isn’t intended to sway readers toward relational systems; it is just the author’s evaluation of both systems’ access languages.
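As a tiny, illustrative taste of the comparison to come (an example of my own, not drawn from the article), the same simple lookup looks like this in each system:

-- Relational SQL
SELECT first_name, last_name FROM customers WHERE city = 'Austin';

// MongoDB shell equivalent
db.customers.find({ city: "Austin" }, { first_name: 1, last_name: 1 })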
