My defined index was on (OWNER, OBJECT_TYPE, OBJECT_NAME) --- ordered from the column with the fewest distinct values to the one with the most. This ordering compresses best with Index Key Compression (also known as Prefix Compression). If I reverse the order, Index Key Compression on the two leading columns wouldn't deliver the same level of compression. The question is whether Advanced Index Compression can intelligently handle the reversal.
SQL> create index target_data_ndx_3_comp on
2 target_data(object_name, object_type, owner) compress 2;
SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP');
PL/SQL procedure successfully completed.
SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_COMP'
4 /
LEAF_BLOCKS
-----------
3091
Surprisingly, this index with Prefix 2 on (OBJECT_NAME, OBJECT_TYPE) is, at 3,091 leaf blocks, smaller than the previous index with Prefix 2 on (OWNER, OBJECT_TYPE) at 5,508 leaf blocks.
Continuing with Prefix 3
SQL> drop index target_data_ndx_3_comp;
SQL> create index target_data_ndx_3_comp on
2 target_data(object_name, object_type, owner) compress 3;
SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP');
PL/SQL procedure successfully completed.
SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_COMP'
4 /
LEAF_BLOCKS
-----------
2277
At 2,277 leaf blocks it is, as expected, the same size as the earlier index with Prefix 3 on (OWNER, OBJECT_TYPE, OBJECT_NAME). Since the entire index key is included in the Prefix, both indexes end up the same size.
Going on to Advanced Index Compression
SQL> drop index target_data_ndx_3_comp;
SQL> create index target_data_ndx_4_advcomp on
2 target_data(object_name, object_type, owner)
3 compress advanced low;
SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_4_ADVCOMP');
PL/SQL procedure successfully completed.
SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_4_ADVCOMP'
4 /
This is, again, as expected. Advanced Index Compression results in the same size irrespective of the ordering of the columns.
The advantage of Advanced Index Compression over Key (Prefix) Compression is that the DBA does not need to determine the Prefix for compression, and does not have to spend time analyzing the data and comparing the number of distinct values for each column in the composite index.
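For reference, this is the kind of analysis Advanced Index Compression spares you. A quick sketch, using the target_data table from above, of comparing the number of distinct values per column when choosing a prefix ordering:

select count(distinct owner)       as owner_ndv,
       count(distinct object_type) as object_type_ndv,
       count(distinct object_name) as object_name_ndv
from target_data;

With plain Prefix Compression, the lowest-NDV columns belong at the front of the key; Advanced Index Compression makes this exercise unnecessary.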
MySQL uses metadata locking to manage concurrent access to database objects and to ensure data consistency when the schema is modified by DDL operations. Metadata locking applies not just to tables, but also to schemas and stored programs (procedures, functions, triggers, and scheduled events).
In this post I am going to cover metadata locks on tables and triggers, which DBAs usually encounter during regular operations and maintenance.
Kindly refer to these four different connections to the MySQL instance:
The screenshot shows that an uncommitted transaction can cause a metadata lock that blocks ALTER operations. The ALTER will not proceed until the transaction is committed or rolled back. What is worse, once the ALTER has been issued and is waiting, any queries against that table (even simple SELECT queries) will be blocked behind it. If the ALTER is an online DDL operation (available in 5.6+), queries will proceed as soon as the ALTER itself begins.
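Since the screenshot is not reproduced here, a minimal sketch of the same scenario (the table name t1 is hypothetical):

-- Session 1: an uncommitted transaction holds a shared metadata lock on t1
BEGIN;
SELECT * FROM t1 LIMIT 1;

-- Session 2: the ALTER needs an exclusive metadata lock, so it waits;
-- SHOW PROCESSLIST reports "Waiting for table metadata lock"
ALTER TABLE t1 ADD COLUMN c2 INT;

-- Session 3: even a plain SELECT now queues behind the pending ALTER
SELECT COUNT(*) FROM t1;

-- Session 1: committing releases the lock; the ALTER, then the SELECT, proceed
COMMIT;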
Refer to this video tutorial on MySQL Metadata Locks for further context.
These days we have the DBA's favourite tool, pt-online-schema-change (osc). Let's have a look at what happens if we run osc instead of a plain ALTER.
OSC gets stuck waiting on the metadata lock at the point of creating triggers on the table.
Let's jump to the second topic: how can we mitigate MDL issues?
Mitigating the Metadata Lock Issues
There are various solutions to tackling MDL:
- Set the wait_timeout variable appropriately so that idle/sleeping connections are closed after a certain time (see the example after this list).
- Configure pt-kill to get rid of stuck/sleeping threads.
- Fix application code where transactions are not committed after performing DB queries.
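For the first option, a minimal illustration (the 300-second value is only an example; pick one that suits your application's connection handling, and note the new values apply to connections made after the change):

SET GLOBAL wait_timeout = 300;
SET GLOBAL interactive_timeout = 300;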
If you are on RDS and your MySQL instance has a bunch of Sleep threads and you don't know which connection is causing the metadata lock, you may have to kill all the Sleep queries that have been in MySQL for more than a certain time. As we know, "kill thread_id" is not permitted in RDS, but you can use the query below to generate the exact statements to kill the Sleep threads.
mysql> SELECT CONCAT('CALL mysql.rds_kill ( ',id,')',';') FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND='Sleep' AND TIME > 10;
+---------------------------------------------+
| CONCAT('CALL mysql.rds_kill ( ',id,')',';') |
+---------------------------------------------+
| CALL mysql.rds_kill ( 5740758);             |
| CALL mysql.rds_kill ( 5740802);             |
| CALL mysql.rds_kill ( 5740745);             |
| CALL mysql.rds_kill ( 5740612);             |
| CALL mysql.rds_kill ( 5740824);             |
| CALL mysql.rds_kill ( 5740636);             |
| CALL mysql.rds_kill ( 5740793);             |
| CALL mysql.rds_kill ( 5740825);             |
| CALL mysql.rds_kill ( 5740796);             |
| CALL mysql.rds_kill ( 5740794);             |
| CALL mysql.rds_kill ( 5740759);             |
| CALL mysql.rds_kill ( 5740678);             |
| CALL mysql.rds_kill ( 5740688);             |
| CALL mysql.rds_kill ( 5740817);             |
| CALL mysql.rds_kill ( 5740735);             |
| CALL mysql.rds_kill ( 5740818);             |
| CALL mysql.rds_kill ( 5740831);             |
| CALL mysql.rds_kill ( 5740795);             |
| CALL mysql.rds_kill ( 4926163);             |
| CALL mysql.rds_kill ( 5740742);             |
| CALL mysql.rds_kill ( 5740797);             |
| CALL mysql.rds_kill ( 5740832);             |
| CALL mysql.rds_kill ( 5740751);             |
| CALL mysql.rds_kill ( 5740760);             |
| CALL mysql.rds_kill ( 5740752);             |
| CALL mysql.rds_kill ( 5740833);             |
| CALL mysql.rds_kill ( 5740753);             |
| CALL mysql.rds_kill ( 5740722);             |
| CALL mysql.rds_kill ( 5740723);             |
| CALL mysql.rds_kill ( 5740724);             |
| CALL mysql.rds_kill ( 5740772);             |
| CALL mysql.rds_kill ( 5740743);             |
| CALL mysql.rds_kill ( 5740744);             |
| CALL mysql.rds_kill ( 5740823);             |
| CALL mysql.rds_kill ( 5740761);             |
| CALL mysql.rds_kill ( 5740828);             |
| CALL mysql.rds_kill ( 5740762);             |
| CALL mysql.rds_kill ( 5740763);             |
| CALL mysql.rds_kill ( 5740764);             |
| CALL mysql.rds_kill ( 5740773);             |
| CALL mysql.rds_kill ( 5740769);             |
| CALL mysql.rds_kill ( 5740770);             |
| CALL mysql.rds_kill ( 5740771);             |
| CALL mysql.rds_kill ( 5740774);             |
| CALL mysql.rds_kill ( 5740784);             |
| CALL mysql.rds_kill ( 5740789);             |
| CALL mysql.rds_kill ( 5740790);             |
| CALL mysql.rds_kill ( 5740791);             |
| CALL mysql.rds_kill ( 5740799);             |
| CALL mysql.rds_kill ( 5740800);             |
| CALL mysql.rds_kill ( 5740801);             |
| CALL mysql.rds_kill ( 5740587);             |
| CALL mysql.rds_kill ( 5740660);             |
+---------------------------------------------+
53 rows in set (0.02 sec)
1. Capture the SQL statements to kill the Sleep threads:
mysql -htest-server.us-west-2.rds.amazonaws.com. --skip-column-names -e 'SELECT CONCAT("CALL mysql.rds_kill ( ",id,")",";") FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND="Sleep" AND TIME > 10' > kill_sleep_threads.sql
2. Execute the statements from the mysql prompt:
mysql -htest-server.us-west-2.rds.amazonaws.com.
mysql> source kill_sleep_threads.sql

Improvements in MySQL 5.7 related to MDL
Generally, we would want to kill as few connections as possible. But the trouble with metadata locks prior to 5.7 is that there is no insight available into which threads are holding the metadata lock. In MySQL 5.7, there are several improvements in getting insight into metadata lock information.
“The Performance Schema now exposes metadata lock information:
- Locks that have been granted (shows which sessions own which current metadata locks)
- Locks that have been requested but not yet granted (shows which sessions are waiting for which metadata locks).
- Lock requests that have been killed by the deadlock detector or timed out and are waiting for the requesting session’s lock request to be discarded
This information enables you to understand metadata lock dependencies between sessions. You can see not only which lock a session is waiting for, but which session currently holds that lock.
The Performance Schema now also exposes table lock information that shows which table handles the server has open, how they are locked, and by which sessions.”
To check who holds the metadata lock in MySQL 5.7, we have to enable the global_instrumentation consumer and the wait/lock/metadata/sql/mdl instrument.
Below is an example of enabling global_instrumentation and wait/lock/metadata/sql/mdl:
mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME = 'global_instrumentation';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'wait/lock/metadata/sql/mdl';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
Once global_instrumentation and wait/lock/metadata/sql/mdl are enabled, the query below will show the lock status of connections.
mysql> SELECT OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, THREAD_ID, PROCESSLIST_ID, PROCESSLIST_INFO FROM performance_schema.metadata_locks INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID WHERE PROCESSLIST_ID <> CONNECTION_ID();
+-------------+---------------+-------------+---------------------+-------------+-----------+----------------+------------------------------------------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | LOCK_TYPE           | LOCK_STATUS | THREAD_ID | PROCESSLIST_ID | PROCESSLIST_INFO                         |
+-------------+---------------+-------------+---------------------+-------------+-----------+----------------+------------------------------------------+
| TABLE       | sbtest        | sbtest1     | SHARED_READ         | GRANTED     |        29 |              4 | NULL                                     |
| GLOBAL      | NULL          | NULL        | INTENTION_EXCLUSIVE | GRANTED     |        30 |              5 | alter table sbtest1 add key idx_pad(pad) |
| SCHEMA      | sbtest        | NULL        | INTENTION_EXCLUSIVE | GRANTED     |        30 |              5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE       | sbtest        | sbtest1     | SHARED_UPGRADABLE   | GRANTED     |        30 |              5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE       | sbtest        | sbtest1     | EXCLUSIVE           | PENDING     |        30 |              5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE       | sbtest        | sbtest1     | SHARED_READ         | PENDING     |        31 |              6 | select count(*) from sbtest1             |
+-------------+---------------+-------------+---------------------+-------------+-----------+----------------+------------------------------------------+
6 rows in set (0.00 sec)
Here PROCESSLIST_ID 4 holds a GRANTED SHARED_READ lock, the ALTER in PROCESSLIST_ID 5 is PENDING on its EXCLUSIVE lock behind it, and the SELECT in PROCESSLIST_ID 6 is PENDING in turn.

Conclusion
Best practice when running any DDL operation, even with the Performance Schema improvements in 5.7, is to check the processlist for MDL waits and check SHOW ENGINE INNODB STATUS for long-running active transactions. Kill the DDL operation while resolving the MDL issue so as to prevent a query pileup. For a temporary fix, implement pt-kill or wait_timeout. To address the root cause of metadata lock issues, review and fix application code/scripts that leave transactions uncommitted.
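As a quick sketch of the processlist check mentioned above, this finds sessions stuck behind a metadata lock (the state string is the one MySQL reports for MDL waits):

SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE state = 'Waiting for table metadata lock';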
- 10 MB storage - free
- 25 MB storage - free
- 5 GB - $175/month
- 20 GB - $900/month
- 50 GB - $2000/month
We covered this topic from a different angle a month ago in converting excel to apex and printing from apex. Both of these blog entries talk about how to use Schema as a Service to solve a problem. Some good references on Schema as a Service can be found through a Safari Books subscription, which lets me get these books on demand and on my iPad for reading on an airplane.
When we login to access the database we are asked to present the schema that we created, a username, and a password. Note in this example we are either using the external free service apex.oracle.com or the Oracle corporate service for employees apex.oraclecorp.com. The two services are exactly the same. As an Oracle employee I do not have access to the public free service and am encouraged to use the internal service for employees. The user interface is the same but screen shots will bounce between the two as we document how to do things.
Once we login we see a main menu system that allows us to manage applications, manage tables in the database, do team development, and download and install custom applications.
The Object Browser allows us to look at the data in the database. SQL Commands allows us to make queries against the database. SQL Scripts allows us to load and save SQL commands to run against the database. Utilities allows us to load and unload data. The RESTful Services option allows us to define html interfaces into the database. If we look at the Object Browser, we can look at table definitions and data stored in tables.
We can use the SQL Commands tab to execute select statements against a table. For example, if we want to look at part number B77077 we can select it from the pricelist by matching the column part_number. We should get back one entry since there is only one part for this part number.
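A minimal sketch of that query, using the pricelist table and part_number column described here:

select * from pricelist where part_number = 'B77077';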
If we search for part number B77473 we get back multiple entries that are the same part number. This search returns six lines of data with more data in other columns than the previous select statement.
SQL Scripts allows you to load scripts to execute from your laptop or desktop. You can take queries that have run against other servers and run them against this server.
Up to this point we have looked at how to write queries, run queries, and execute queries. We need to look at how to load data so that we have something to query against. This is done in the Utilities section of the Schema as a Service. Typically we start with a data source either as an XML source or an Excel spreadsheet. We will look first at taking an Excel spreadsheet like the one below and importing it into a table.
Note that the spreadsheet is well defined and headers exist for the data. We do have some comments at the top that we need to delete so that the first row becomes the column names in our new table. Step one is to save the sheet as a comma separated value file. Step two is to edit this file and delete the comments and blank lines. Step three is to upload the file into the Schema as a Service web site.
At this point we have a data source loaded and ready to drop into a table. The user interface allows us to define the column types and tries to figure out whether everything is character strings, numbers, or dates. The tool is good at the import but typically trips over character lengths and throws exceptions on specific rows. If this happens you can either manually enter the data or re-import the data into an existing table. Doing this can potentially cause duplication of data, so deleting the new table and re-importing into a new table might be a good thing. You have to play at this point to import your data and get the right column definitions to import all of your data.
Alternatively we can import xml data into an existing table. This is the same process of how we backup our data by exporting it as xml.
At this point we have loaded data using a spreadsheet or csv file and an xml file. We can query the database by entering SQL commands or loading SQL scripts. We could load data from a SQL script if we wanted, but larger amounts of data need to be imported with a file. Unfortunately, we cannot take a table from an on-premise database or an rman backup and restore it into this database. We can't unplug a pdb and plug it into this instance. This service does have limitations, but for free or less than $200 per month, the service provides a lot of functionality.
To read and display our data, we must create an application. To do this we go into the application development interface of Schema as a Service.
We select the Application Builder tab at the top and click on the Create icon. This takes us to a selection of what type of application to build. We are going to build a desktop application since it has the most functionality. We could have easily selected the mobile option, which formats displays for a smaller screen format.
We have to select a name for our application. In this example we are calling it Sample_DB since we are just going to query our database and display contents of a table.
We are going to select one page. Note that we can create multiple pages to display different information or views into a table. In our previous blog entry on translating excel to apex we created a page to display the cost of archive and the cost of storage on different pages. In this example we are going to create one page and one page only.
If we have shared components from other apps or want to create libraries that can be called from other apps, we have the option at this point to define that. We are not going to do this, but just create a basic application to query a database.
We can create a variety of authorization sources to protect our data. In this example we are going to allow anyone to read and write our table. We select no authorization. We could use an external source or look at the user table in the database to authenticate users. For this example, we will leave everything open for our application.
We get a final confirmation screen (not shown) and create the application. When the create is finished we see the following screen that lets us either run the application or edit it.
If we click on the Home icon we can edit the page. This screen is a little daunting. There are too many choices and things that you can do. There are different areas, breadcrumbs for page to page navigation, and items that you can drop into a screen. In this example we are going to add a region by hovering the mouse over the Content Body and right clicking the mouse. This allows us to create a new region in the body of our page.
Note that a new region is created and is highlighted in the editor. We are going to edit the content type. We have a wide variety of options. We could type in static text and this basically becomes a static web page. Note that we could create a graph or chart. We could create a classic report. We could create a form to submit data and query a table. We will use the interactive report because it allows us to enter sql for a query into our table.
In this example we will enter the select statement in the query box. We could pop this box into another window for full screen editing. For our example we are doing a simple query against our table with select * from pricelist.
When we click the run button at the top right we execute this code and display it in a new window. We can sort this data, and we can change the query and click Go. This is an interactive form to read data from our table. If we wanted to restrict the user from reading all of the data we would have selected a standard report rather than an interactive report.
The final part of our tutorial is creation of a REST api for our data. We would like to be able to go to a web page and display the data in the table. For example, if we want to look at the description of part number B77077 it would be nice to do it from a web page or get command at the command line. To do this we go to the SQL Workshop tab and click the RESTful Service icon at the top right.
Again, this screen is a little daunting. We get a blank screen with a create button. Clicking the create button takes us to a screen where we need to enter information that might not be very familiar.
The screen we see is asking us to enter a name for our service, a template, and a resource handler. Looking at this for the first time, I am clueless as to what this means. Fortunately, there is an example on how to enter this information if you scroll down and click on the Example button.
If we look at the example we see that the service name is the header that we will hit from our web page.
In our example we are going to create a cloud REST api where we expose the pricelist. In this example we call the service cloud. We call the resource template pricelist and allow the user to pass in a part_number to query. In the resource handler we have a GET function that does a select from the table. We could use the part number that is passed in, but for simplicity we ignore it and return all rows in the table. Once we click save, we have exposed our table to a web query with no authentication.
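As a rough sketch, the pieces of that definition look something like this (names as described above; the {part_number} template and :part_number bind syntax are illustrative):

-- Service (module):  cloud
-- URI template:      pricelist/{part_number}
-- Resource handler:  GET, source type Query
-- Handler source (the :part_number bind is available but deliberately ignored):
select * from pricelist;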
Once we have created our REST service we can query the database from a web browser using the url of the apex server: /pls/apex/(schema name)/pricelist/(part number). In this example we go to apex.oraclecorp.com/pls/apex/parterncloud/pricelist/B77077. It executes the select statement and returns all rows in the table using JSON format.
In summary, we are able to upload, query, and display database data using http and https protocols. We can upload data in xml or csv format. We can query the database using web based tools or REST interfaces. We can display data either by developing a web based program to display data or by pulling the data from a REST interface and getting the data in JSON format. This service is free if your database size is small. If we have a larger database we can pay for the service as well as host the application to read the data. We have the option to read the data from a REST interface and pull it into an application server at a different location. We did not look at uploading data with a PUT interface through the REST service, but we could have done this as well. Up next, how do we implement this same service in AWS or Azure.
A recent post on the OTN database forum reminded me how easy it is to forget to keep repeating a piece of information after the first couple of hundred times you’ve explained it. No matter how “intuitively obvious” it is for one person, it’s new to someone else.
Here's an execution plan that raised the question that prompted this note. It comes from calling dbms_xplan.display_cursor() with the 'allstats last' format option after enabling rowsource execution statistics (using the gather_plan_statistics hint, or setting the statistics_level parameter to all, or setting the hidden parameter "_rowsource_execution_statistics" to true):
-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |               |      1 |        |   1989 |00:00:04.96 |    9280 |    897 |
|   1 |  NESTED LOOPS OUTER                  |               |      1 |   2125 |   1989 |00:00:04.96 |    9280 |    897 |
|   2 |   NESTED LOOPS OUTER                 |               |      1 |   2125 |   1989 |00:00:04.93 |    9271 |    895 |
|   3 |    NESTED LOOPS OUTER                |               |      1 |   2125 |   1989 |00:00:00.03 |    5732 |      0 |
|   4 |     COLLECTION ITERATOR PICKLER FETCH|               |      1 |   1989 |   1989 |00:00:00.01 |       0 |      0 |
|*  5 |     TABLE ACCESS BY INDEX ROWID      | TABLE1        |   1989 |      1 |   1178 |00:00:00.03 |    5732 |      0 |
|*  6 |      INDEX RANGE SCAN                | IDX_TABLE1    |   1989 |      2 |   2197 |00:00:00.02 |    3545 |      0 |
|   7 |    TABLE ACCESS BY INDEX ROWID       | TABLE2        |   1989 |      1 |   1178 |00:00:03.26 |    3539 |    895 |
|*  8 |     INDEX UNIQUE SCAN                | IDX_TABLE2_PK |   1989 |      1 |   1178 |00:00:03.25 |    2359 |    895 |
|   9 |   TABLE ACCESS BY INDEX ROWID        | TABLE3        |   1989 |      1 |      0 |00:00:00.03 |       9 |      2 |
|* 10 |    INDEX UNIQUE SCAN                 | IDX_TABLE3_PK |   1989 |      1 |      0 |00:00:00.03 |       9 |      2 |
-----------------------------------------------------------------------------------------------------------------------------------
Many people have heard about “tuning by cardinality feedback” (a term first used, I think, by Wolfgang Breitling many years ago), and it’s fairly common knowledge that this requires you to compare the predicted (estimated) E-rows with the actual A-rows generated by a line of the plan. A critical detail that often fails to appear when this point is being explained is that there is a very important difference between the meaning of E-rows and A-rows. Although this is a point I make very clear in my presentations on how to read execution plans, it was only when I found myself explaining it to Tony Hasler a couple of years ago that I realised that it’s not commonly known and that, in casual conversation, it’s easy to overlook the fact that it’s something that you might need to mention explicitly. So here’s the critical detail:
- E-Rows is the number of rows the optimizer is expecting the operation to produce each time it is called
- A-Rows is the cumulative number of rows summed across all calls to the operation
As a consequence of this difference, any comparison you make should not be comparing A-Rows with E-Rows; it should be comparing A-Rows with E-Rows * Starts. So, for example, operation 6 in the plan above (index range scan idx_table1) shows a reasonable estimate: 1,989 starts at an estimated 2 rows per start gives 3,978, against the final 2,197 rows produced (roughly 1.1 actual rows per start), which is fairly close, especially when you consider that there's a lot of integer rounding going on to display the values.
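For reference, a minimal recipe for producing a plan like the one above (the statement itself is a placeholder):

SQL> alter session set statistics_level = all;
SQL> select ... ;   -- the statement under investigation
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));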
An interesting article from Sarah – much good advice there!
VR was the big thing at the Samsung Developer Conference, and one of the points that got driven across, both in the keynotes and in other talks throughout the day, was that VR is a fundamentally new medium—something we haven’t seen since the motion picture.
Injong Rhee, the executive VP of R&D for Software and Services, laid out some of VR’s main application areas: Gaming, Sports, Travel, Education, Theme Parks, Animation, Music, and Real Estate. Nothing too new here, but it is a good summary of the major use cases, and they echo what we’ve heard in our own research.
He also mentioned some of the biggest areas needing innovation: weight, dizziness, image quality, insufficient computing power, restricted mobility, and limited input control. For anyone who's tried the Gear VR and had to use the control pad on the side of the visor, I think we can agree it's not ideal for long periods of time. And while some VR apps leave me and others with no nausea at all, other apps, where you're moving around and stepping up and down, can certainly cause some discomfort. I'm curious to see how some of those problems of basic human physiology can be overcome.
A fascinating session after the keynote was with Brett Leonard, who many years ago directed Lawnmower Man, a cautionary tale about VR which, despite the bleak dystopic possibilities it portrayed, inspired many of today's VR pioneers. Leonard appeared with his brother Greg, a composer, and Frank Serafine, an Oscar-winning sound designer who did the sound for Lawnmower Man.
Brett, Greg, and Frank made a solid case for VR as a new medium that has yet to be even partially explored, and that will surely have a plethora of new conventions storytellers will need to work with. We've become familiar with many aspects of the language of film, such as action that happens off screen but is implied. But with the 360-degree experience of VR, there's no longer that same framing of shots, or things happening off the screen. The viewer chooses where to look.
Brett also listed his five laws of VR, which cover some of his concerns, given that it is a powerful medium that could have real consequences for people's minds and physiology, particularly those of developing children. His laws, heavily paraphrased, are:
- Take it seriously.
- VR should promote interconnecting with humanity, not further reinforcing all the walls we already have, and that technology so far has helped to create.
- VR is its own reality.
- VR should be a safe space—there are a huge amount of innovations possible, things that we haven’t been able to consider before. This may be especially so for medical and psychological treatments.
- VR is the medium of the global human.
Another interesting part of the talk was about true 360-degree sound, which Serafine said hadn’t really been done well before, but with the upcoming Dolby Atmos theaters, finally has.
Good 360-degree sound, not just stereo like we’re used to, will be a big part of VR feeling increasingly real, and will pose a challenge for VR storytelling, because it means recording becomes more complex, and consequently editing and mixing.
Samsung also announced their effort for the connected car, with a device that looks a lot like the Automatic (previously blogged about here) or the Mojio. It will offer all the features of those other devices—driving feedback that can become a driver score (measuring hard braking, fast accelerating, hard turns, and the like), as well as an LTE connection that allows it to stay connected all the time and serve as a WiFi hotspot. But Samsung adds a little more interest to the game with vendor collaborations, like with Fiat, where you can unlock the car, or open the trunk from your app. This can’t currently be done with other devices.
It should come out later this year, and will also have a fleet offering, which should appeal to enterprise companies. If they have more of these exclusive offerings because of Samsung's relationships with various vendors, maybe it will do better than its competitors.
Now that we have looked at one of our use cases and the cost savings associated with using the cloud for peak utilization, reducing the cost of servers and software in our data center, let's dive into the pricing and configuration of Database as a Service (DBaaS) offered by Oracle in the public cloud. If we click on the Platform -> Database menu we see the following page.
If we scroll down to the bottom we see that there are effectively three services that we can use in the public cloud. The first is Database Schema as a Service. This allows you to access a database through a web interface and write programs to read and present data to the users. This is the traditional Application Express interface or APEX interface that was introduced in Oracle 9. This is a shared service where you are given a database instance that is shared with other users. The second service is Database as a Service. This is the 11g or 12c database installed on a Linux installation in the cloud. This is a full installation of the database with ssh access to the operating system and sqlplus access to the database from a client system. The third service is Exadata as a Service. This is the Oracle database on dedicated hardware that is optimized to run the Oracle database.
The Schema as a Service is also known as Application Express. If you have never played with apex.oracle.com, click on the link and register for a free account. You can create an instance, a database schema, and store up to 10 MB or 25 MB of data for free. If you want to purchase a larger storage amount it is sold in 5 GB, 20 GB, or 50 GB increments.
The 10 or 25 MB instance is free. The 5 GB instance is $175/month. The 20 GB is $900/month, and the 50 GB is $2,000/month.
Tomorrow we will dive a little deeper into Schema as a Service. In summary, this is a database instance that can contain multiple tables and has an application development/application web front end allowing you to access the database. You cannot attach with sqlplus. You cannot attach on port 1521. You cannot put a Java or PHP front end in front of your database and use it as a back end repository. You can expose database data through applications and REST api interfaces. This instance is shared on a single computer with other instances. You can have multiple instances on the same computer, and the login gives you access to your applications and your data in your instance.
The Database as a Service (DBaaS) is slightly different. With this you are getting a Linux instance that has been provisioned with a database. It is a fully deployed, fully provisioned database based on your selection criteria. There are many options when you provision DBaaS. Some of the options are virtual vs full instance, 11g vs 12c, and standard edition vs enterprise edition vs enterprise edition high performance vs enterprise edition extreme performance. You need to provide an expected data size, whether you plan on backing up the data, and a cloud object repository if you do. You need to provide ssh keys to login as oracle or opc/root to manage the database and operating system. You also need to pick a password for the sys/system user inside the database. Finally, you need to pick the processor and memory shape that will run the database. All of these options have a pricing impact. All of these options affect functionality. It is important to know what each of these options means.
Let's dive into some of these options. First, virtual vs full instance. If you pick a full instance you will get an Oracle Enterprise Linux installation that has the version of the database that you requested fully installed and operational. For standard installations the storage uses the logical volume manager and is provisioned across four file systems. The /u01 file system is the ORACLE_HOME. This is where the database binary is installed. The /u02 file system is the +DATA area. This is where table extents and table data are located. The /u03 file system is the +FRA area. This is where backups are dropped using the RMAN command, which should run automatically every night for incremental backups and at 2am on Sunday morning for a full backup. You can change the times and backup configurations with command line options. The /u04 file system is the +RECO area. This is where change logs and other log files are dropped. If you are using Data Guard to replicate data to another database or from another database, this is where the change logs are found.
If you pick a virtual instance you basically get a root file system running Oracle Enterprise Linux with a tar ball that contains the oracle database. You can mount file systems as desired and install the database as you have it installed in your data center. This configuration is intended to mirror what you have on-premise to test patches and new features. If you put everything into /u01 then install everything that way. If you put everything in the root file system, you have the freedom to do so even though this is not the recommended best practice.
The question that you are not asked when you try to create a DBaaS instance is whether this service is metered or non-metered. This question is asked when you create your identity domain. If you request a metered service, you have the flexibility to select the shapes that you want and whether you are billed hourly or monthly. The rates are determined by the processor shape, amount of memory, and what database option you select (standard, enterprise, high performance, or extreme performance). More on that later. With the metered option you are free to stop the database (but not delete it) and retain your data. You suspend the consumption of the database license but not the compute and storage. This is a good way of saving a configuration for later testing and not getting charged for using it.

Think of it as having an Uber driver sit outside the store but not charge you to sit there. When you get back in the car the charge starts. A better analogy would be Car2Go. You can reserve a car for a few hours and drive it from Houston to Austin. You park the car in the Car2Go parking slot next to the convention center and don't pay for parking. You come out at the end of your conference, swipe your credit card, and drive the car back to Houston. You only get charged for the car when it is between parking lots. You don't get charged for it while it is parked in the reserved slot. You pay a monthly charge for the service (think of compute and storage) at a much lower rate.

A non-metered service is more like renting a car from a car rental place: you pay for the car that they give you and it is yours until you return it to the rental office. You can't stop paying for the car while you are at your convention, as you could with Car2Go. You have to pay for parking at the hotel or convention center. You can't decide halfway into your trip that you really need a truck instead of a car, or a mini-van to hold more people, and change out cars. The rental company will end your current agreement and start a new one with the new vehicle. Non-metered services are similar. If you select an OC3M shape then you can't upgrade it to an OC5 to get more cores. You can't decide that you need Diagnostics and Tuning and upgrade from Enterprise Edition to Enterprise Edition High Performance. You get what you started with and have 12 months to consume the services reserved for you.
The choice of 11g or 12c is a relatively simple one. You get 11.2.0.4 running on Oracle Enterprise Linux 6.6 or you get 12.1.0.2 running on Oracle Enterprise Linux 6.6. This is one of those binary questions. You get 11g or 12c. It really does not affect any other question. It does affect features, because 12c has more features available, but this choice is simple. Unfortunately, you can't select 11.2.0.3 or 10.whatever or 9.whatever. You get the latest running version of the database and have an option to upgrade to the next release when it is available, or not upgrade. Upgrades and patches are applied after you approve them.
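Once the instance is provisioned, a quick way to confirm from SQL*Plus exactly which release you received:

select banner from v$version;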
The next choice is the type of database. We will dive into this deeper in a couple of days. The basics are that you pick Standard Edition or Enterprise Edition. You have the option of picking just the base Enterprise Edition with encryption only, most of the options with the High Performance option, or all of the options with the Extreme Performance option. The difference between High Performance and Extreme Performance is that Extreme includes the Active Data Guard, In-Memory, and Real Application Clusters options. Again, we will dive into this deeper in a later blog entry.
The final option is the configuration of the database. I wanted to include a screen shot here, but the main options that we look at are the CPU and memory shape, which dictates the database consumption cost, as well as the amount of storage for table space (/u02) and backup space (/u03 and /u04). There are additional charges above 128 GB for table storage and for backups. We will not go into the other options on this screen in this blog entry.
In summary, DBaaS is charged on a metered or non-metered basis. The non-metered option is lower cost but less flexible. If you know exactly what you need and when you need it, this is the better option: costs are fixed and expenses are predictable. If you don't know what you need, the metered service might be better. It gives you the option of starting and stopping different processor counts, shutting off the database to save money, and selecting different options to test out different features. Look at the cost options; we will do a blog in a few days analyzing the details of cost. Basically, the database can be mentally budgeted at $3K/OCPU/month for Enterprise Edition, $4K/OCPU/month for High Performance, and $5K/OCPU/month for Extreme Performance. Metered options typically cross over at 21 days: if you use a metered service for more than 21 days your charges will exceed this amount; if you use it for less, it will cost less.
The Exadata as a Service is a special use case of Database as a Service. In this service you are getting a quarter, half, or full rack of hardware that is running the database. You get dedicated hardware that is tuned and optimized to run the Oracle database. Storage is dedicated to your compute nodes and no one else can use these components. You get 16, 56, or 112 processors dedicated to your database. You can add additional processors to get more database power. This service is available in a metered or non-metered option. All of the database options are available with this product. All of the processors are clustered into one database and you can run one or many instances of a database on this hardware. With the 12c option you get multi-tenant features so that you can run multiple instances and manage them with the same management tools, giving users full access to their own instance but not to other instances running on the same hardware.
Exadata cost for metered services
Exadata cost for non-metered services
In summary, there are two options for database as a service. You can get a web based front end to a database and access all of your data through http and https calls. You can get a full database running on a Linux server or Linux cluster that is dedicated to you. You can consume these services on an hourly, monthly, or yearly basis. You can decide on less expensive or more expensive options as well as how much processor, memory, and storage you want to allocate to these services. Tomorrow, we will dive a little deeper into APEX or Schema as a Service and look at how it compares to services offered by Amazon and Azure.
The vibe was felt all over the MGM compound, but even more so in the demo and exhibit area. This is where video mavens Ruth Kale-Fok, Maureen Boctor, Brian Mock, David Hope-Ross, Martin Taylor, and Kathryn Perry were working their camera and interview magic.
You know, some people freeze when they are asked to speak on camera.
Generally, analysts and influencers don't fall into that camp.
True to form, the analysts and influencers at the conference were more than happy to share their viewpoints on the importance of customer experience and the role Oracle is playing with its end-to-end solution.
Watch this short video starring Dr. Natalie Petouhoff from Constellation Research, Keith Dawson and Aphrodite Brinsmead from Ovum, Rebecca Wetterman from Nucleus Research, and Mark Smith from Ventana Research.
They also captured some great quotes from conference attendees in these video clips:
- What is the most important thing about CX by Aman Bedi, CTO, TAIStech
- Why are customer experiences important to business by Layne Nelson, Oracle CX Cloud CoE, N.A., IBM
- How security of data in the cloud enhances the customer experience by David Canellos, Sr. VP, Advanced Technology Group, Blue Coat Systems
- How has the importance of the customer changed in the last decade by Sydnée Reese, Marketing Manager, Diverse: Issues In Higher Education
- Primary motivation for coming to the Modern CX conference by Cathy Ellico, Sr. Manager, Marketing Communications, NADA
- How have customer expectations changed in the last decade by Shawn McKenna, President, ProLink Solutions
- What every service executive should be thinking about by Andy Bashkin, VP of Sales, OpenMethods
Many turn to slashing expenses as a sure way to deliver desired earnings results. Others try to build new products or lower prices to lure customers from the competition. But what they miss is that their success, and that of their company, lies with their existing customers. The American Customer Satisfaction Index says:
Companies with happier customers deliver five times the return of companies with unhappy customers.
A 5x return would validate any CEO―and thrill his or her board members and shareholders. But pushing good customer service to get happier customers isn’t exactly jazzy or unique for a nervous CEO who needs to show results quickly.
However, the research doesn’t lie; nor do our personal experiences. In his keynote, Mark shared his frustration about his bank and his US-based cell phone service provider. Both companies have siloed processes which translate into unnecessary and irritating interactions―and plenty of lost opportunities to engage and delight customers like him.
So Mark’s advice to the CEOs and professionals in the audience was to step up their customer experience games with a suite of integrated applications. “There are lots of CX providers so it’s easy to go down the path of more silos; you’ll get some benefits, innovation, but the suite is the real benefit,” said Mark. The recipe for success is really pretty simple―get to know your customers’ wants and needs and serve them well.
Oracle is the only vendor with an integrated suite.
Oracle is the leader.
Oracle is committed to providing best-of-breed integrated solutions in each area.
Oracle can help every CEO move closer to that 5x return.
Author: Mitchell Palski – Oracle Fusion Middleware Specialist
Let's start off with this: why would you upgrade your Oracle Forms applications? Oracle Forms Builder and Oracle Forms Services have been upgraded to simplify the development and deployment of Forms applications on the Web. In a world where very few applications stand alone and most need to integrate into the Enterprise, Oracle recognizes the importance of being able to carry your legacy Forms applications into the future while enabling your Service-Oriented Architecture to leverage their capabilities. The future of Forms includes improving the Java-based Web user interface and extending product "openness" by allowing Java integration on all three tiers.
Aside from improving your traditional Oracle Forms offering, there is a new consideration that many of us have never had to deal with in the past: mobile access. Oracle Forms and Reports are convenient tools for developing applications for our mobile workforce, yet they've been restricted to laptops and desktop machines. Here's the good news: Oracle partner Auraplayer has the skillset and the software to help your organization quickly and efficiently overcome that obstacle.
Why did I say “skillset”? Well, Auraplayer has some really talented people in their management ranks that have been working with Oracle’s Fusion Middleware technology for a long time. One of the challenges that I had when I first heard about Auraplayer was understanding their value for my customers, so I talked to CEO Mia Urman and she helped explain to me that Auraplayer isn’t just selling a product (you’ll hear about that in a minute), they’re also helping customers upgrade. Oracle Forms has been around for so long that there aren’t a ton of projects out there that have the same personnel on staff as when they first started. Auraplayer brings the industry expertise to mitigate your risks and deliver a successful Forms upgrade so your team can start taking advantage of all the new features 12c has to offer.
Upgrades aren't the most exciting projects in the world, so woopty-doo, right? Well, the reason that I'm excited is because if you're working with Auraplayer to upgrade your Forms application, why not let them tell you about their Mobile offering while they're there? I won't try to go through all of their features in this post today, but let me give you the highlights (from my perspective):
- You don't have to change your Forms applications. Re-writing code to enable a mobile UI can be costly, can take a long time to complete, and requires support from your development team after deployment.
- Auraplayer will REST-enable your Forms application so you can consume services across the Enterprise, making integration easy and mobile development fast and efficient.
- Couple Auraplayer with Oracle Mobile Cloud Service and you have a complete project plan from start to finish.
- Define and enable processes
- Automatically generate services
- Develop UI and consume services – works on any mobile device
So, do your end users have smart phones and tablets? Yes they do. Do you need to upgrade your legacy Forms apps and get those users actively using them in the field? You tell me!
New York City Metropolitan Transportation Authority (NYC MTA) figured out a way to "work smarter, not harder" by leveraging Auraplayer. Read about their success here: https://blogs.oracle.com/mobile/entry/new_york_mta_mobile_cloud
If you want to learn more, check out these videos by Grant Ronald:
- Mobilizing Oracle Forms with Oracle MCS and Auraplayer
- The Architecture for Mobilizing Oracle Forms
- Recording Web Services for Mobilizing Oracle Forms
- A Custom Application Demo of Mobilizing Oracle Forms
- Oracle Cloud Management Pack for Oracle Database
- Oracle Cloud Management Pack for Oracle Fusion Middleware
- Oracle Cloud Management Pack for Testing
Without these packs we cannot use any self-service provisioning features.
With DBLM you can use Deployment Procedures to create/clone new DBs, but you cannot use the self-service portal.
The traditional DB Lifecycle Management pack provides all the management of the DB, but there are some exceptions, like the one noted below:
Note: The ability to create 'new' full clones is licensed under Database Lifecycle Management Pack for Oracle Database, but the additional 'refresh' function requires licensing the Cloud Management Pack for Oracle Database.
Let us discuss a scenario where a customer may require Hybrid Cloud Management capabilities:
The customer's OMS is running on-premise and they also have some cloud service subscriptions.
If they just want to manage their cloud service targets, they can use the Hybrid Agent, making sure they have the traditional license packs for those targets, to get a single pane of glass across all their on-premise traditional targets and cloud targets.
They can also use the REST API that comes with the cloud service offerings to manage and view their cloud services.
If the customer is not going to build any in-house private cloud service (MWaaS, DBaaS, IaaS, PaaS, SCHaaS, SnapClone, etc.) using OEM, they may not require the Cloud Management packs.
The Cloud Management packs provide top-up features on top of the traditional packs.
Oracle Cloud Management Pack for Oracle Database
Prerequisites
The Oracle Cloud Management Pack for Oracle Database requires the Database Lifecycle Management Pack for Oracle Database.
As of now, to raise a request to create an instance of DBCS or JCS on the Oracle Cloud, you have to use the REST API or the Cloud UI.
Hopefully we will soon have the capability in the OEM Cloud Management Pack to raise a service request directly against an Oracle Cloud service (a cloud service hosted at an Oracle data center, or at the customer as a private cloud service), which will create the instance in the cloud, discover the assets, and manage them in OEM.
Content contributed by Balaji Pattabhiraman
A few months ago, I wrote in this blog about a feature in PeopleTools called Simplified Analytics. This is really transformative technology that puts tremendous power in the hands of end users, enabling them to perform ad-hoc analytics right in context of their transactions and business processes. It makes reporting timely, secure, and relevant.
Let's expand on that concept and take advantage of some other PeopleTools features. In this case, imagine I am a manager or subject area expert. I've created several simplified analytics that I want to make available to my team for use in their daily work. Simplified Analytics enables me to create a tile from any analytic I create. Those tiles can then be placed on any Fluid home page or Dashboard, which is another new PeopleTools feature. Let's see an example of this.
As a business administrator I've created several simplified analytics for the My Team page. These are available in the My Analytics tab in Related Information.
Note that each tab on this page--Summary, Performance, Compensation, and Leave Balances--contains different analytics relevant to each topic. I've created these analytics to provide better decision support for these processes.
This calls for a quick review of an important part of the Simplified Analytics process. When creating or editing the analytic, I have the option of publishing the analytic so it can be used by others. (See this post for more info on creating a Simplified Analytic.)
When I choose to publish, I have the option of publishing to the My Analytics section of the Related Actions frame or publishing to the Tile Repository. Since I'll be creating an analytic dashboard, I'll save to the repository.
Now I can go to the Personalize option from the menu in any home page. From here I can create my new analytic home page. (This could be created as a Dashboard as well.)
In this case, I'll create a new home page called Team Analytics. This will provide a single access point for our group of analytics, making them available in a convenient form to all team members.
Now that I've created the new home page, I can add tiles representing the analytics that were created previously.
I'll add several analytic tiles and make this home page a robust and valuable analytic tool.
Once I save the page it is displayed. Here you see the home page with all the tiles I've added.
Now let's publish the home page and make it available to the team so all can benefit from it. (Note: If I don't publish the home page to other users I can still use it as a personal home page. One might do this for analytics that are used exclusively by me.)
Enter a name and label and any other field data necessary. In this case I'm making it public, but in many cases you will want to make a page like this available only to a particular role or permission list.
Now when someone on our team logs in (in this case Rosanna), she will have access to the new Team Analytics page.
Note that the data from the analytics correspond to Rosanna's team. If the same analytics were published to a different group, the data would be relevant to that group. The data, context, and security are determined by the PS Query that forms the basis of the analytic.
This illustrates how you can extend the power of Simplified Analytics to make valuable decision support available to teams, not just individuals.
A colleague recently asked if there was a way to do column-level dependency tracking. He wanted to know, for a given view, which tables, and which columns on those tables, it was dependent upon, without, of course, reading through the code.
I was vaguely aware that since 11gR1 Oracle has been tracking fine grained (column) dependencies, but couldn’t find a way of seeing the details stored, until I found this interesting article from Rob Van Wijk:
I passed the details on to our DBA who implemented it and it seemed to work, for us. Your mileage may vary, of course.
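For the curious, a rough sketch of the kind of dictionary query involved, run as a suitably privileged user (the view name is hypothetical, and the columns carry the meanings quoted from dcore.bsq below; your dictionary may differ by version):

select do.name as dependent, po.name as referenced, d.d_attrs, d.d_reason
from   sys.dependency$ d
       join sys.obj$ do on do.obj# = d.d_obj#
       join sys.obj$ po on po.obj# = d.p_obj#
where  do.name = 'MY_VIEW';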
Some comments on Rob’s blog post, bearing in mind, of course, that it was written in 2008 and refers to 11gR1:
- D_ATTRS contains values other than “000100000A”. I’ve observed this in a basic 12c install and a production 11gR2 install
- D_ATTRS is commented in $ORACLE_HOME/rdbms/admin/dcore.bsq as “/* Finer grain attr. numbers if finer grained */”
- D_REASON is commented in $ORACLE_HOME/rdbms/admin/dcore.bsq as “/* Reason mask of attrs causing invalidation */”. On my basic 12c installation, all rows contain NULL for this value although on a production 11gR2 database I observed a handful of rows with values in this column.
- Noted from the comments against Rob’s article is the opportunity to vote for this feature on OTN here
My colleague asked me yesterday how to enable copy and paste in the command line SQL*Plus window on Windows 7 – a simple enough task…
On the shortcut that starts the command line version of SQL*Plus, right click and bring up the Properties dialog. Navigate to the Options tab and make sure QuickEdit mode is checked on, as below:
Now start SQL*Plus and you'll find that holding the left mouse button down whilst dragging selects an area, pressing return copies the selected text, and pressing the right mouse button pastes the copied text.
If you’d prefer to read this from a Microsoft source, try here, where other methods of setting this up are detailed as well as enabling the Autocomplete facility.
After a whirlwind day at Modern CX, I hurried my way back up to San Francisco for the last day of the Samsung Developers Conference 2016. The morning started out exciting with a giveaway gift of the Samsung Gear 360 Camera.
Oh, I have plans for you.
When we pull down the Platform menu we see that there are different areas that we can dive into.
Data management is the first area that we will review. This is basically a way to aggregate and look at data. We can store data in a database, back up on-premise databases into the cloud, store data in NoSQL repositories, and do analytics on a variety of data with the Big Data Preparation and Big Data services. All of these involve pulling data into a repository of some type and performing queries against the repository. The key differences are the way that the data is stored, how we can ask questions, and the results that we get back. At this point we will not dive into any of these deeply, but at a later point we will dive deep into the database and database backup.
The Application Development area moves farther away from the technology of storing data and closer to how we present data to users. The Java platform, for example, allows us to do things like create a shopping cart or host more complex applications in a Java repository or container. The Mobile Cloud Service allows us to dive into existing applications and present a user interface to iPhones, Android phones, and tablets. The idea is to customize existing web and fat clients into a mobile format that can be consumed on mobile devices. The Messaging Cloud Service is a messaging protocol that allows for transactions in the cloud. If you are looking at connecting different cloud services together, it allows you to serialize the communication between vendors for a true transactional experience. The Application Container Cloud is a lightweight Java container allowing you to upload and run Java applications, but without access to the operating system. This is a shared multi-tenant version of a WebLogic server. The Developer Cloud Service is a DevOps integration for the Java and Database services. This service is an aggregation of public domain components used to develop microservices at the database or Java layer. The Application Builder Cloud Service is a cloud based REST api development interface allowing you to integrate with application software in the Oracle Cloud as well as other clouds. The API Catalog is a way of publishing the REST apis that you have and exposing them to your customers.
The Content and Process Cloud Services are an aggregation of services that address group communications as well as business process flow. The Documents Cloud Service is a way of sharing files on the web. The Process Cloud Service is an extension that allows you to launch business processes (think Business Process Manager, or BPM) in the cloud. The Sites Cloud Service is a web portal interface that takes documents and processes and aggregates them into a single cloud site, letting you take a wiki-like presentation and put business processes into it. The Social Network Cloud Service allows you to integrate social networks like Facebook and Twitter into your web presence, and to search those repositories for information relating to your company.
The Business Analytics part of Platform services provides data visualization and analytic tools as well as data aggregation utilities. The Business Intelligence component is the traditional BI package that allows users to create custom queries into your database. Big Data Preparation allows you to aggregate data from a variety of sources into a Big Data repository. Big Data Discovery allows you to look at your data in a variety of ways and generate reports based on your data and views of data. The Data Visualization Cloud Service allows you to view and analyze your data from different perspectives; it is similar to the BI and Big Data services but looks at data slightly differently. The Internet of Things Cloud Service allows you to aggregate monitoring and measuring devices into a repository.
The Cloud Integration part of Platform services comprises the traditional tools for aggregating data from other repositories. The Integration Cloud Service allows you to connect traditional SaaS vendors and unify fields like how a customer is defined or what data elements are incorporated into a purchase order. The SOA Cloud Service is an implementation of the Oracle SOA Suite in the cloud. The GoldenGate Cloud Service is an implementation of the Oracle GoldenGate software, which allows you to take data from different databases and synchronize the repositories independent of the database vendor. The Internet of Things Cloud Service is the same service listed in the Business Analytics section above.
The Cloud Management part of Platform services allows you to take the log files from inside your data center and analyze them for a variety of purposes. You can aggregate your log files into the Log Analytics Cloud Service to look for patterns, intrusion attempts, and problems or issues with services. The IT Analytics Cloud Service looks at log files for trends, like disks filling up or processors being used or not used appropriately. The Application Performance Cloud Service uses log files to examine how systems and applications are operating end to end, rather than how individual components are working.
In summary, we looked at an overview of the Platform as a Service offerings from Oracle. Unfortunately, the variety of topics is too great for one blog post, so we did only a high-level overview of these services. In upcoming blogs we will dive deeper into each of these services and look at not only what they are but how they work and how to provision them. We will also compare and contrast these services with those offered by Amazon and Azure as we dive into each one.
New features in v 5.0:
1. Redesigned UI with ADF 12.2.1 Responsive template. UI is aligned with Alta UI best practices
2. Group by ECID functionality. We can track a request action from top to bottom. This includes client request time (the time needed to complete an action on the UI, including network traffic time), executed SQL queries, and any issues in ADF BC performance for the given request
3. ADF click history logging. Click history data about UI performance is intercepted and logged into the Red Samurai DB for analysis. This gives a database of all user requests and the time for each request (see the sketch after this list)
4. User request statistics visualization dashboard
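Once the click history is in the database, pulling a single request’s activity back out is a simple query. The sketch below assumes a hypothetical REQUEST_LOG table; the actual Red Samurai table and column names will differ:
-- Hypothetical log table; the real Red Samurai schema will differ
SELECT ecid,
       component_id,
       component_type,
       request_time_ms
FROM request_log
WHERE ecid = :ecid
ORDER BY log_time;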
Here you can see sample data from the ADF demo application. User request statistics are presented to help in understanding system performance. The time for each user request is visualized, along with detailed information about the request (component ID, name, type, etc.). Average times are presented, along with user statistics and top actions. We display the total request count vs. recent requests, to visualize the load on the system:
Each of the logged requests can be tracked down by ECID. Here we can see a list of VOs invoked during a UI request and any slow performance behavior happening in ADF BC:
In the next update, v 6.0, we are going to implement UI behavior analysis, to extract the most common UI usage patterns in an ADF application.
With all the new Always On features in SQL Server 2016, which you can read up on here, it’s easy to forget about Transparent Data Encryption (TDE). This blog post will focus on TDE.
TDE encrypts database files at rest. What this means is that your .MDF and .NDF files (and consequently your backups) will be encrypted, so you will not be able to detach the database files and attach them on another server, or restore a backup there, unless that server has the same certificate that was used to encrypt the database.
In this blog post I am using SQL Server 2014 and will explain how to enable TDE on an existing availability group (AG) database.
- The first thing we need to check is whether the server has a database master key on each replica in the AG
USE master
GO
SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##'
The screenshot below shows I don’t have a key, so I need to create one
- Create a database master key on each of the replicas in the AG. It is important to use a complex password
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'C&mpl£xP@$$Wrd'
GO
- Run the code from step 1 again and this time you should see the below
- Now we need to create a certificate to use for the encryption of the database on the primary replica. This can be accomplished by using the below
CREATE CERTIFICATE BackupEncryptionCert
WITH SUBJECT = 'SQL Server 2014 AdventureWorks2012 Encryption Certificate';
GO
- Validate the Certificate
SELECT name, pvt_key_encryption_type_desc, thumbprint FROM sys.certificates
The thumbprint will be useful because when a database is encrypted, it will indicate the thumbprint of the certificate used to encrypt the Database Encryption Key. A single certificate can be used to encrypt more than one Database Encryption Key, but there can also be many certificates on a server, so the thumbprint will identify which server certificate is needed
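For example, once a database has been encrypted, you can match it back to the certificate protecting its Database Encryption Key by joining on the thumbprint. This is a sketch using the sys.dm_database_encryption_keys DMV; run it in master:
-- Match each TDE-encrypted database to the certificate
-- protecting its Database Encryption Key
SELECT DB_NAME(dek.database_id) AS database_name,
       c.name AS certificate_name,
       c.thumbprint
FROM sys.dm_database_encryption_keys dek
JOIN sys.certificates c ON c.thumbprint = dek.encryptor_thumbprint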
- Next we need to back up the certificate on the primary replica
BACKUP CERTIFICATE BackupEncryptionCert
TO FILE = 'C:\BackupCertificates\BackupEncryptionCert.bak'
WITH PRIVATE KEY (FILE = 'C:\BackupCertificates\BackupEncryptionCertKey.bak',
ENCRYPTION BY PASSWORD = 'Certi%yC&mpl£xP@$$Wrd')
The BACKUP CERTIFICATE command will create two files. The first file is the server certificate itself. The second file is a “private key” file, protected by a password. Both files and the password will be used to restore the certificate onto other instances.
- The files created in step 6 need to be copied to each of the other replicas, and the certificate created in SQL Server there. After the files are copied, the below command can be used to create the certificate
CREATE CERTIFICATE BackupEncryptionCert
FROM FILE = 'C:\BackupCertificates\BackupEncryptionCert.bak'
WITH PRIVATE KEY (FILE = 'C:\BackupCertificates\BackupEncryptionCertKey.bak',
DECRYPTION BY PASSWORD = 'Certi%yC&mpl£xP@$$Wrd');
- That’s all the configuration needed for each instance; now we are ready to start encrypting the database. We need to tell SQL Server which encryption algorithm we want to use and which certificate to use. This can be done by running the following code on the primary replica. Note that the Database Encryption Key lives inside the database being encrypted, so the code must be run in the context of that database
USE AdventureWorks2012
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE BackupEncryptionCert
GO
- Finally, the last step is to enable TDE by executing the below command on the primary replica
ALTER DATABASE AdventureWorks2012 SET ENCRYPTION ON
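Enabling encryption kicks off a background scan of the data files; the database stays online while it runs. As a rough way to monitor progress, you can query the sys.dm_database_encryption_keys DMV (an encryption_state of 2 means the scan is in progress, 3 means the database is fully encrypted):
-- Watch the TDE background scan progress
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys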
And that’s it, I hope you enjoyed this tutorial and found it informative. If you have any questions, please comment below.