When the Smart Flash Cache was introduced in Exadata, it cached reads only. So there were only read "optimization" statistics, like cell flash cache read hits and physical read requests/bytes optimized, in V$SESSTAT and V$SYSSTAT (the former accounted for the read IO requests that got their data from the flash cache, and the latter accounted for the disk IOs avoided thanks to both the flash cache and storage indexes). So if you wanted to measure the benefit of the flash cache alone, you had to use the cell flash cache read hits metric.
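For example, a quick way to eyeball these counters at the instance level (a minimal sketch; the statistic names are the ones discussed in this post):

-- instance-wide read optimization counters on Exadata
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('cell flash cache read hits',
                'physical read requests optimized',
                'physical read total bytes optimized');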
This all was fine until you enabled the Write-Back flash cache in a newer version of cellsrv. We still had only the "read hits" statistic in the V$ views! And when investigating it closer, both the read hits and write hits were accumulated in the same read hits statistic! (I can't reproduce this on our patched 22.214.171.124 with the latest cellsrv anymore, but it was definitely the behavior earlier, as I demoed in various places.)
Side-note: This is likely because it's not so easy to just add more statistics to Oracle code within a single small patch. The statistic counters are referenced by other modules using macros with their direct numeric IDs (and memory offsets into the v$sesstat array), and the IDs and addresses would change when more statistics get added. So you can pretty much add new statistic counters only with new full patchsets, like 126.96.36.199. It's the same with instance parameters, by the way; that's why the "spare" statistics and spare parameters exist: they're placeholders for temporary use until the new parameter or statistic gets added permanently with a full patchset update.
So, this is probably the reason why both the flash cache read and write hits initially got accumulated under the cell flash cache read hits statistic, but later on this seemed to get "fixed", so that the read hits statistic showed only read hits and the flash write hits were not accounted anywhere. You can test this easily by measuring your DBWR's V$SESSTAT metrics, with Snapper for example: if you get way more cell flash cache read hits than physical read total IO requests, then you're probably accumulating both read and write hits in the same metric.
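A minimal check along those lines (a sketch; the DBW0 session SID is looked up dynamically, and the statistic names are the same ones used throughout this post):

-- read hits far exceeding total read requests suggest write hits
-- are being counted in the same statistic
SELECT sn.name, st.value
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  st.sid = (SELECT sid FROM v$session WHERE program LIKE '%DBW0%')
AND    sn.name IN ('cell flash cache read hits',
                   'physical read total IO requests');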
Let’s look into a few different database versions:
SQL> @i

USERNAME             INST_NAME    HOST_NAME                  SID  SERIAL#  VERSION        STARTED
-------------------- ------------ ------------------------- ----- -------- -------------- --------
SYS                  db12c1       enkdb03.enkitec.com        1497    20671 188.8.131.52.0 20131127

SQL> @sys cell%flash

NAME                                                             VALUE
---------------------------------------------------------------- --------------------------
cell flash cache read hits                                                          1874361
In the 184.108.40.206 database above, we still have only the read hits metric. But in the Oracle 220.127.116.11 output below, we finally have the flash cache IOs broken down by reads and writes, plus a few special metrics indicating whether the block written to already existed in the flash cache (cell overwrites in flash cache) and whether the block range written to flash was only partially cached in flash when the DB issued the write (cell partial writes in flash cache):
SQL> @i

USERNAME             INST_NAME    HOST_NAME                  SID  SERIAL#  VERSION         STARTED
-------------------- ------------ ------------------------- ----- -------- --------------- --------
SYS                  dbm012       enkdb02.enkitec.com         199      607 18.104.22.168.0 20131201

SQL> @sys cell%flash

NAME                                                             VALUE
---------------------------------------------------------------- --------------------------
cell writes to flash cache                                                           711439
cell overwrites in flash cache                                                       696661
cell partial writes in flash cache                                                        9
cell flash cache read hits                                                           699240
So this probably means that the upcoming Oracle 22.214.171.124 will have the flash cache write hit metrics in it too, and in the newer versions there's no need to get creative when estimating the write-back flash cache hits in our performance scripts (the Exadata Snapper currently tries to derive this value from other metrics, relying on the bug where both read and write hits accumulated under the same metric, so I will need to update it based on the DB version we are running on).
So, when I look into one of the DBWR processes in a 126.96.36.199 DB on Exadata, I see the breakdown of flash read vs write hits:
SQL> @i

USERNAME             INST_NAME    HOST_NAME                  SID  SERIAL#  VERSION        STARTED
-------------------- ------------ ------------------------- ----- -------- -------------- --------
SYS                  dbm012       enkdb02.enkitec.com         199      607 188.8.131.52.0 20131201

SQL> @exadata/cellver
Show Exadata cell versions from V$CELL_CONFIG....

CELL_PATH            CELL_NAME            CELLSRV_VERSION      FLASH_CACHE_MODE     CPU_COUNT
-------------------- -------------------- -------------------- -------------------- ----------
192.168.12.3         enkcel01             184.108.40.206.1     WriteBack                    16
192.168.12.4         enkcel02             220.127.116.11.1     WriteBack                    16
192.168.12.5         enkcel03             18.104.22.168.1      WriteBack                    16

SQL> @ses2 "select sid from v$session where program like '%DBW0%'" flash

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       296 cell writes to flash cache                                            50522
       296 cell overwrites in flash cache                                        43998
       296 cell flash cache read hits                                               36

SQL> @ses2 "select sid from v$session where program like '%DBW0%'" optimized

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       296 physical read requests optimized                                         36
       296 physical read total bytes optimized                                  491520
       296 physical write requests optimized                                     25565
       296 physical write total bytes optimized                              279920640
If you are wondering why the cell writes to flash cache metric is roughly 2x bigger than physical write requests optimized, it's because of the ASM double mirroring we use. The physical write metrics are counted at the database-scope IO layer (KSFD), but the ASM mirroring is done at a lower layer in the Oracle process codepath (KFIO). So when DBWR issues a 1 MB write, the v$sesstat metrics record a 1 MB IO for it, but the ASM layer at the lower level actually does 2-3x more IO due to double or triple mirroring. As the cell writes to flash cache metric is sent back from all storage cells involved in the actual (ASM-mirrored) write IOs, we see around 2-3x more storage flash write hits than physical writes issued at the database level (depending on which mirroring level you use). Another way of saying this: the "physical writes" metrics are measured at a higher level, above the ASM mirroring, and the "flash hits" metrics are measured at a lower level, below the ASM mirroring in the IO stack.
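A rough sanity check of that mirroring ratio for the DBWR session shown above (a sketch; with normal redundancy expect a value near 2, with high redundancy near 3):

-- ratio of cell-side flash write hits to database-level optimized writes
SELECT MAX(DECODE(sn.name, 'cell writes to flash cache', st.value))
       / NULLIF(MAX(DECODE(sn.name, 'physical write requests optimized', st.value)), 0)
         AS mirror_write_ratio
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  st.sid = 296
AND    sn.name IN ('cell writes to flash cache', 'physical write requests optimized');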
Since Docker relies on cgroups and LXC, it should be easy with UEK3. We provide official support for LXC and are in fact a big contributor to the LXC project (shout-out to Dwight Engen), and the Docker website says that you need to be on kernel 3.8 for it to just work. So OL6.5 + UEK3 seems like the perfect combination to start out with.
Here are the steps to do a few very simple things:
- Install Oracle Linux 6.5 (with the default UEK3 kernel (3.8.13))
- To quickly play with docker you can just use their example
(*) if you are behind a firewall, set your HTTP_PROXY
-> If you start from a Basic Oracle Linux 6.5 installation, install lxc first. Your out-of-the-box OL should be configured to access the public-yum repositories.
# yum install lxc
-> ensure you mount the cgroups fs
# mkdir -p /cgroup ; mount none -t cgroup /cgroup
-> grab the docker binary
# wget https://get.docker.io/builds/Linux/x86_64/docker-latest -O docker
# chmod 755 docker
-> start the daemon
(*) again, if you are behind a firewall, set your HTTP_PROXY setting (http_proxy won't work with docker)
# ./docker -d &
-> you can verify if it works
# ./docker version
Client version: 0.7.0
Go version (client): go1.2rc5
Git commit (client): 0d078b6
Server version: 0.7.0
Git commit (server): 0d078b6
Go version (server): go1.2rc5
-> now you can try to download an example using ubuntu (we will have to get OL up there :))
# ./docker run -i -t ubuntu /bin/bash
this will go and pull in the ubuntu template and run bash inside
# ./docker run -i -t ubuntu /bin/bash
WARNING: IPv4 forwarding is disabled.
root@7ff7c2bae124:/#
and now I have a shell inside ubuntu!
-> ok so now on to playing with OL6. Let's create and import a small OL6 image.
-> first install febootstrap so that we can create an image
# yum install febootstrap
-> now you have to point to a place where you have the repo XML file and the packages on an HTTP server. I copied my ISO content over to a place I can reach over HTTP.
I will install some basic packages in the subdirectory ol6 (it will create an OL installed image). This is based on what folks did for CentOS, so it works the same way (https://github.com/dotcloud/docker/blob/master/contrib/mkimage-centos.sh).
# febootstrap -i bash -i coreutils -i tar -i bzip2 -i gzip \
  -i vim-minimal -i wget -i patch -i diffutils -i iproute -i yum \
  ol6 ol6 http://wcoekaer-srv/ol/
# touch ol6/etc/resolv.conf
# touch ol6/sbin/init
-> tar it up and import it
# tar --numeric-owner -jcpf ol6.tar.gz -C ol6 .
# cat ol6.tar.gz | ./docker import - ol6
-> list the images
# ./docker images
REPOSITORY  TAG     IMAGE ID      CREATED        SIZE
ol6         latest  d389ed8db59d  8 minutes ago  322.7 MB (virtual 322.7 MB)
ubuntu      12.04   8dbd9e392a96  7 months ago   128 MB (virtual 128 MB)
And now I have a docker image with ol6 that I can play with!
# ./docker run -i -t ol6 ps aux
WARNING: IPv4 forwarding is disabled.
USER  PID %CPU %MEM   VSZ  RSS TTY STAT START TIME COMMAND
root    1  1.0  0.0 11264  656 ?   R+   23:58 0:00 ps aux
Way more to do but this all just worked out of the box!
# ./docker run ol6 /bin/echo hello world
WARNING: IPv4 forwarding is disabled.
hello world
That's it for now. Next time, I will try to create a mysql/ol6 image and various other things.
This really shows the power of containers on Linux, and of Linux itself. We have all these various Linux distributions, but inside LXC (or Docker) you can run Ubuntu, Debian, Gentoo, your own custom crazy thing, old versions of OL, newer versions of OL, and it will all just run on the same host kernel.
I can run OL6.5 and create OL4, OL5 and OL6 containers or Docker images, but I can also run any old Debian or Slackware images at the same time.
I had the privilege to present with John Klinke of Oracle WebCenter Product Management during a recent webinar. John and I discussed the integrations that Fishbowl and Oracle provide for SharePoint, and instead of focusing on the features and functions of the integrations (connectors), we chose to detail the use cases that each of the integrations satisfies. It was important to each of us, not to mention our respective companies, that we take this approach, as customers were asking what the differences were between the connectors. Before I summarize the use cases for the integrations, let me start with the underlying technical differences.
SharePoint Storage Options
With the release of SharePoint 2013, Microsoft still provides the ability to store content outside the SQL Server database. This is facilitated through Remote BLOB Storage, or RBS, which effectively enables BLOBs (binary large objects) to be stored within 3rd-party storage systems. Storing BLOBs outside of SQL Server was useful in SharePoint 2007 and 2010, as the BLOBs (Word documents, PowerPoint presentations, etc.) were causing overall SQL performance issues because queries to the database had to go through many BLOBs to return data requests. However, SharePoint 2013 features shredded storage, which basically saves versions of documents in small chunks that get reassembled when users access them. For example, a simple text edit to a Word document, say a change to the document's footer, would result in only the incremental change being saved to the database and not the entire document. You don't need to be a database expert to understand the positive performance impact this would have.
Anyway, using RBS still has its advantages, and the obvious one is for those customers looking to integrate SharePoint with Oracle WebCenter Content. RBS provides a proven integration method to move SharePoint content and associated metadata to WebCenter Content for access, consumption and delivery to other Oracle-based systems. However, RBS is basically an all-or-nothing approach. That is, wherever an RBS provider has been enabled, at the site collection, site or library level, ALL document versions in that location will be stored remotely. The only way to limit what gets stored is by file size or type. So, for organizations that do wish to store the majority of SharePoint content remotely, in this case in Oracle WebCenter Content, RBS is the way to go, and this is the integration method that Oracle provides as of the WebCenter 22.214.171.124 release.
Customers looking for a more selective approach to store SharePoint content items will want to consider Fishbowl’s SharePoint connector integration for WebCenter. Fishbowl’s integration does not utilize RBS, and instead SharePoint event receivers are leveraged to determine document storage. This integration approach provides more granular control over content storage, while also giving SharePoint users specific control over the content items they want to store in WebCenter. The tradeoff with this more granular, user-controlled option is that duplicate items get stored between the systems.
With the technical details of each integration out of the way, let’s now talk use cases.
Use Cases for Fishbowl’s SharePoint Integration
- Content Publishing
The business scenario I discussed during the webinar was that of a SharePoint user on a marketing team working on a new product launch. During the lead up to the actual launch date, the user and their colleagues have created many assets to support the launch, including a brochure, new copy for the website, a launch plan, graphics and other images, and a press release. Most of these assets have multiple versions, and the user only wants to store or publish final versions of each so that they get surfaced to the company’s website.
Fishbowl’s SharePoint Connector for Oracle WebCenter Content features the ability to store only major versions of content in WebCenter. This gives users with specific knowledge of the content being published the ability to control when that happens, while also ensuring that only the final version of content gets stored before it can be seen internally or externally.
- Project Lifecycle Governance
This use case satisfies the requirement that many organizations have with their SharePoint system – deleting SharePoint libraries or whole sites at the conclusion of a project. The example I shared for this use case was that of members of a legal team working on a company acquisition. They have created and collected many documents to help with the acquisition, but once the acquisition is complete, the SharePoint library or site must be deleted to ensure the documents remain privy to the legal team and cannot be seen by anyone not authorized to do so.
For this use case, Fishbowl’s SharePoint Connector could be configured to allow content storage in WebCenter to occur via a checkbox. The description for the checkbox is configurable; for example, it could simply say “Store in WebCenter”. Such a checkbox allows a site arbiter on the legal team to determine the content that needs to be retained and stored in WebCenter. This could be content that needs to be retained for compliance or legal reasons, as well as content that needs to be shared with users outside the legal team, such as members of the executive team.
- Business Specific Storage Requirements
For this use case, the example I shared is an organization that has many different requirements for the SharePoint content they wish to store in WebCenter. These requirements are driven by the various business units. For example, members of an organization’s financial team will have different retention requirements and will have to store the majority of the content they create per financial document retention rules. Contrast this with the legal team example described above, who do not want to store the majority of their content and want to be more selective. The feature to leverage for this use case is the ability to override storage settings that are initially made at the SharePoint central admin level. This feature enables organizations to get their SharePoint to WebCenter integration up and running quickly, but puts the control of content storage in the hands of the business units that understand exactly what content they need to store in WebCenter for retention, distribution, and re-purposing.
Use Cases for Oracle’s SharePoint Integration
I will not try to fully detail the use cases that John did such a great job discussing during the webinar, so I will provide a summary instead. For a more detailed description, please watch the on-demand recording. John begins discussing Oracle’s use cases at about minute 43.
- Improve Performance
John spoke to the advantages of storing BLOBs outside of SQL Server, which helps improve overall system performance. With Oracle’s connector leveraging RBS, it is very easy for organizations to centralize all SharePoint content in WebCenter and leverage the Oracle database to scale to trillions of items.
- Improve Governance
For this use case, John spoke to how a lot of companies using SharePoint have struggled with governance of the system. Sites and overall use quickly spiral out of control, leaving IT to clean up the mess of orphaned sites and content. By centrally managing this content in WebCenter, organizations can leverage the records and retention management policies they already have in place to better manage content.
- Re-Use Content
The point John made with this use case is that by centralizing SharePoint content in WebCenter, that content can then be re-used or surfaced to other Oracle-based systems and applications – WebCenter Portal, WebCenter Sites, E-Business Suite, etc. Companies can leverage Oracle WebCenter’s out-of-the-box integrations for this purpose. The big benefit here is getting rid of SharePoint silos, and providing users access to high-value content outside of SharePoint.
Use Case Summaries
Well, there you have it. Integrating SharePoint and Oracle WebCenter Content can be achieved via the integrations that Fishbowl and Oracle provide. As you consider such an integration, please first consider your integration use case and ultimately what your organization is trying to achieve. Here is a table that summarizes and compares use cases for each integration:
You can access and watch the webinar recording from Fishbowl’s YouTube Channel. Enjoy, and please pass along any feedback.
Thanks for reading!
Jason Lamon is a product strategist and technology evangelist who writes about a range of topics regarding content management and enterprise portals. He writes to keep the communication going about such topics, uncover new opinions, and to get responses from people who are smarter than him. If you are one of those people, feel free to respond to his tweets and posts.
The development of the cloud has changed the way IT managers handle corporate infrastructures. The integration of scalable storage and remote access features has made it easier for organizations to save on operational expenses by providing flexible capacity that is not possible with legacy equipment. As more enterprises shift away from on-premises storage, demand for traditional methods is fading, reported InfoWorld.
The IT paradigm shift
The popularity of the cloud's innovative and customizable approach to enterprise infrastructure has made it easier for IT managers to create digital architectures that more directly cater to the tech needs of different companies. Although cloud services are still considered an emerging market, the enormous foundation they have created in the corporate world has already impacted the way companies upgrade their storage systems. According to the news provider, the rising demand for cloud-based solutions has reduced the market for legacy equipment.
This rapid evolution of an enterprise strategy is relatively atypical, noted the source. However, there are a few reasons why the rise of the cloud might have been predictable. Its fluid, on-the-fly adjustment speeds make it ideal for big data organizations that experience regular fluctuations in the amount of information they process throughout the fiscal year. Additionally, the cloud's customizable environment enables IT managers to build their own interfaces from scratch or leverage cloud-based applications that can be tailored to construct unique, needs-based architectures.
Database administration, for example, connects the organization's cloud to a team of remote DBA experts who help decision-makers guide the categorization of information. Unlike traditional methods of data storage, which typically require a more hands-on approach, the cloud allows IT teams to shift their focus away from the full-time duties of data maintenance so they can spend more time addressing the needs of the company.
According to Dark Reading, data analytics on the cloud is becoming even easier as providers continue to simplify the way that information is managed across these digital infrastructures.
Although every cloud developer is unique, businesses considering a transition should spend time clarifying their corporate goals so that the best options can be deployed.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our full suite of cloud migration and support services, please visit our Cloud DBA Service page or contact us.
And here we go:
If you follow the link you will get the following information:
- KitKat 4.4: Smart, simple, and truly yours
- and several other enhancements of version 4.4
For a complete history of all updates visit this posting.
Imagine for a second that you come from Brazil and are currently working in Angola. Would you be taking a trip to Manchester to attend UKOUG Tech13? That’s what Alex Zaballa did.
If there was an award for, “Most Committed UKOUG Tech13 Attendee”, he’s got to be in with a shot at it.
Today I attended the Middleware WebLogic stream.
It was very interesting to learn a bit about Oracle's view and strategy around Java and the middle-tier layer.
With the aim of continuing to provide cloud management tools and products, Oracle wants to achieve stronger integration between WebLogic and the database, both in their 12c versions, around the following features:
- Application availability with Transaction Guard
- Multitenant Database integration
- Database Connection Pool
- Global Data Services
Oracle also gave an introduction to Oracle Coherence. It is, in effect, an intermediate layer between the middleware tiers and several kinds of data sources. Its main functionalities are:
- Data caching
- Synchronous and asynchronous backups
- Multi-machine loss protection
- Multi data center
The current WebLogic release supports Java EE 6; support for Java EE 7 will come during 2014/2015.
Now let me talk about the update and roadmap of Oracle Cloud Services. The goal is to ease the deployment of Java-based applications by providing instant, self-service access to a pre-configured environment.
There are 2 branches:
- Java cloud services: Dedicated platform for Fusion Middleware SaaS extensions
- Java as a service: For all Java EE and legacy applications
In this case Oracle aims to provide hosting services around WebLogic Java application containers. There are three levels of support:
- Basic: Pre-configured, automatically installed Weblogic software.
- Managed: Oracle manages one or more Weblogic domains
- Maximum availability: Managed HA environment, Weblogic cluster integrated with RAC
I asked whether, when using such services, administrators keep direct access to the underlying operating system, but the answer was not clear. It seems to be the case for a few use cases, but obviously the general intent is that you no longer deal with any software brick other than Java and the Fusion Middleware products. They invited us to have a look at the documentation on this topic.
To ease adoption of their product, Oracle provides a Java Cloud Service SDK and plug-ins supported by all mainstream Java development platforms (JDeveloper, Eclipse and NetBeans).
Oracle also plans to provide the same kind of high-level cloud services for the development process, aiming to integrate development tools such as Maven, Hudson, bug tracking, etc., in a dedicated cloud space set up for Java applications.
To summarize, the impression I had after attending today's presentations is that Oracle's strategy for the future is to invite the Java players in the market to develop new applications on Oracle's infrastructure and, once finished, to host the results there as well.
In the meantime, Oracle assured us they will continue to support the Java community, around NetBeans and GlassFish for instance, even though they have stopped commercial support for the latter.
Today the first session I attended was presented by Tom Kyte, talking about SQL topics we sure do not know.
The first topic concerned the possibility of using the PLSQL_WARNINGS setting with the PL/SQL compiler. This feature has existed since version 10.1 but is not widely used. The different values you can define for PLSQL_WARNINGS with a classic ALTER SESSION command are:
Severe: code might cause unexpected errors or wrong results
Performance: code might cause performance issues
Informational: code is not wrong but this is just bad code
Thus, by using this command before compiling your PL/SQL code, you will receive messages helping you to improve it.
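For instance, a minimal sketch (the procedure is hypothetical; an unreachable branch like this typically raises an informational compiler warning):

ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:ALL';

CREATE OR REPLACE PROCEDURE dead_code_demo AS
BEGIN
  IF 1 = 1 THEN
    NULL;
  ELSE
    NULL;  -- unreachable branch: flagged by the compiler warnings
  END IF;
END;
/
SHOW ERRORS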
The second hint was about implicit conversion. For example, the implicit conversion from string to number or to date is probably the first cause of bugs and performance degradation. It can be even worse with implicit conversions that rely on NLS default settings.
Had we used the PL/SQL warnings, those implicit conversions would have been flagged.
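A classic illustration of the problem (a sketch with a hypothetical table where empno is stored as an indexed VARCHAR2 column; the conversion is applied to the column, not the literal, so the index becomes unusable):

-- implicit conversion: effectively TO_NUMBER(empno) = 7900, index ignored
SELECT * FROM emp WHERE empno = 7900;

-- explicit conversion of the literal keeps the index usable
SELECT * FROM emp WHERE empno = TO_CHAR(7900);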
One interesting hint concerned the different levels of the optimizing compiler. You can set the PLSQL_OPTIMIZE_LEVEL parameter to the following values:
1: the code is not rearranged
2: the code may be rearranged
3: the code is aggressively rearranged (including subprogram inlining)
He showed us some very interesting examples demonstrating that level 3 can be very aggressive and even modify the results.
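Setting the level is straightforward (a sketch; the procedure name is hypothetical):

ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3;
ALTER PROCEDURE my_proc COMPILE;

-- check which level each unit was compiled with
SELECT name, plsql_optimize_level
FROM   user_plsql_object_settings;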
To summarize, the PL/SQL warnings framework is important and very useful; the last level of PLSQL_OPTIMIZE_LEVEL must be used with caution.
My second slot was about Oracle Active Data Guard, the Next Generation Data Protection with Oracle 12c presented by Mr. Larry Carpenter, Master Oracle Product Manager at Oracle USA.
He presented the following new features:
The new Data Guard Fast Sync feature reduces the synchronous transport impact. In maximum availability protection mode, the standby database acknowledges the receipt of redo before writing it to the standby redo logs. The main benefit is a reduced synchronous performance impact on the primary database; consequently, primary database performance is more predictable.
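In broker terms this is just a transport-mode property (a sketch; the standby name is hypothetical; Fast Sync corresponds to SYNC transport without waiting for the standby disk write, i.e. NOAFFIRM):

DGMGRL> edit database 'STDBY' set property LogXptMode='FASTSYNC';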
He also announced that DML on global temporary tables is now supported in Active Data Guard 12c. It is enabled by the new parameter TEMP_UNDO_ENABLED.
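Enabling it is a one-liner (a sketch; the parameter can also be set at the session level):

ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;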
The Data Guard broker has also evolved in terms of monitoring, robustness, log monitoring and warnings, and validating role transition readiness. For example, the transport lag and apply lag are now included in the show database output. The validate command can be used to assess readiness for a switchover or a failover: it validates each database's current status, verifies there is no archive log gap, detects parameter and property inconsistencies, and also performs a log switch on the primary database to verify the log is applied on all standbys.
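For example (a sketch; the standby name is hypothetical):

DGMGRL> show database 'STDBY';      -- now includes Transport Lag and Apply Lag
DGMGRL> validate database 'STDBY';  -- checks readiness for a role transition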
The rolling upgrade has also been centralized and simplified; it now uses a dedicated interface, the DBMS_ROLLING package, which greatly simplifies the rolling upgrade in comparison to the previous 11g release by using only five phases (see the sketch after this list):
dbms_rolling.init_plan: generates an upgrade plan
dbms_rolling.set_parameters: modifies the parameters of the rolling upgrade
dbms_rolling.start_plan: configures primary and standby database for the upgrade
dbms_rolling.switchover: swaps roles between the primary and the standby, this is the only downtime
dbms_rolling.finish_plan: completes upgrade and resynchronizes
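A minimal sketch of that sequence from SQL*Plus (the standby name is hypothetical; check the DBMS_ROLLING documentation for the exact parameter-setting calls):

-- designate the future primary and generate the upgrade plan
EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'STDBY');
-- optionally adjust plan parameters, then build and start the plan
EXEC DBMS_ROLLING.BUILD_PLAN;
EXEC DBMS_ROLLING.START_PLAN;
-- ... upgrade the standby, then:
EXEC DBMS_ROLLING.SWITCHOVER;   -- the only downtime
EXEC DBMS_ROLLING.FINISH_PLAN;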
Finally I attended an EM12c session, "Using BI Publisher with EM12c for enhanced productivity", presented by Mr. Dhananjay Papde, author of the Oracle EM12c administration cookbook.
BI Publisher is Oracle's primary reporting tool for authoring, managing and delivering highly formatted documents, published in various formats such as PDF, Word, Excel, PowerPoint, etc.
To use BI Publisher with EM12c you have to install it and integrate it with EM12c in two phases:
1: software-only install of BI Publisher version 126.96.36.199.0
2: integrate BI Publisher with EM12c
Note that in the next version of OEM 12c (188.8.131.52.0) you won't be obliged to install and configure BI Publisher; it will be integrated directly into the OMS.
Before running the configuration it is recommended to take a backup of the repository and the EM domain; then you only have to run the configureBIP script located in the $OMS_HOME/BIP/bin directory.
Creating BI Publisher reports can be done in an easy way: you select the EM repository as the data source and use the query builder to build your queries from the MGMT$ views.
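For example, a simple data-source query against one of the repository views (a sketch; the columns selected are just an assumption of what a basic target inventory report might show):

-- list managed targets for a simple inventory report
SELECT target_name, target_type, host_name
FROM   mgmt$target
ORDER  BY target_type, target_name;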
Another interesting point is that you can efficiently manage your BI Publisher from the EM12c console. By accessing the BI Farm summary you have access to a lot of metrics which may help you administer your BI Publisher correctly.
I also participated in an interactive, open question-and-answer session about Oracle Enterprise Manager 12c. The session was really interactive: at the beginning the Oracle managers asked classic questions about what kinds of targets we administer with EM12c and whether we use products other than EM12c; then the problem of licensing was discussed, in particular the very complicated usage of the different pack licensing methods for the different targets administered by EM12c. It was a pleasure to share this session with different customers and Oracle managers.
Interactive Quick Reference: Oracle Database 12c. This Interactive Quick Reference is a multimedia tool that presents terms and concepts used in the Oracle Database 12c release. Built as a multimedia web page, this diagram provides descriptions of database architectural components with references to relevant documentation. The perfect cheat sheet for writing custom data dictionary scripts, locating views pertinent to a specific database component or category, or understanding the overall database architecture.
As the cloud market advances, more companies are switching to cloud-based infrastructures to manage their corporate IT. Because of its scalable storage capabilities and its remote access features, cloud services provide decision-makers with increased mobility, enhanced customization and the ability to change their strategies more readily than with traditional architectures.
Enhanced security essential for successful cloud deployments
As with all new technologies, however, discerning the best way to secure the company's network after deploying the cloud is the next step. According to Rajiv Gupta, contributing writer to USA Today, the rapid growth of the cloud has led to an increase in the amount of data that is outsourced to a third-party organization. As a result, the source noted that the expansion of the cloud might have occurred faster than cloud developers can create lasting solutions to the services' weak cybersecurity.
Information breaches can be prevented, however. Although cloud providers leverage their own high-quality data maintenance strategies, cybercriminals are learning how to circumvent these protections, which weakens the cloud market overall. In an effort to thwart hackers, the source reported that layers of defense are key in preparing a safe cloud-based environment.
Encryption, for instance, should be the first layer of defense that is applied to data before it is sent to the cloud. With this strategy, sophisticated algorithms scramble information so that if stolen or compromised in any way, the information is rendered useless unless the hacker also retrieves the decryption key. Cloud-based applications, such as database administration, can also be utilized to provide a team of remote DBA experts who will help companies maintain the integrity of their information.
Learn how the cloud provider manages security
SC Magazine noted that decision-makers should make a conscious effort to learn how their potential cloud providers handle data security. As every cloud is unique, the services and information maintenance strategies often differ between cloud developers, and it's important for IT managers to be aware of how their data will be protected. According to the source, the evolution of the cloud is moving toward a more centralized version of security, focusing on innovative and unique approaches to safeguarding information after it has already been deployed.
Decision-makers who are considering a transition to the cloud should be conscious of the cloud's features and determine how it can help the company stretch the bottom line.
Big Data Analytics From Strategic Planning to Enterprise Integration with Tools, Techniques, NoSQL, and Graph
What is Big Data Analytics?
Big Data Analytics: From Strategic Planning to Enterprise Integration with Tools, Techniques, NoSQL, and Graph, by David Loshin. This book helps readers understand what Big Data is, why it can add value, what types of problems are suited to a big data approach, and how to properly plan to determine the need, align the right people in the organization, and develop a strategic plan for integration.
It has 11 chapters. It would be good if readers could read some chapters online... Anyway, readers will see:
Chapter 1: We consider the market conditions that have enabled broad acceptance of big data analytics, including commoditization of hardware and software, increased data volumes, growing variation in types of data assets for analysis, different methods for data delivery, and increased expectations for real-time integration of analytical results into operational processes.
Chapter 2: In this chapter, we look at the characteristics of business problems that traditionally have required resources that exceeded the enterprises’ scopes, yet are suited to solutions that can take advantage of the big data platforms (either dedicated hardware or virtualized/cloud based).
Chapter 3: Who in the organization needs to be involved in the process of acquiring, proving, and deploying big data solutions? And what are their roles and responsibilities? This chapter looks at the adoption of new technology and how the organization must align to integrate into the system development life cycle.
Chapter 4: This chapter expands on the previous one by looking at some key issues that often plague new technology adoption and show that the key issues are not new ones and that there is likely to be organizational knowledge that can help in fleshing out a reasonable strategic plan.
Chapter 5: In this chapter, we look at the need for oversight and governance for the data, especially when those developing big data applications often bypass traditional IT and data management channels.
Chapter 6: In this chapter, we look at specialty-hardware designed for analytics and how they are engineered to accommodate large data sets.
Chapter 7: This chapter discusses and provides a high-level overview of tool suites such as Hadoop.
Chapter 8: This chapter examines the MapReduce programming model.
Chapter 9: In this chapter, we look at a variety of alternative methods of data management methods that are being adopted for big data application development.
Chapter 10: This chapter looks at business problems suited for graph analytics, what differentiates the problems from traditional approaches and considerations for discovery versus search analyses.
Chapter 11: This short final chapter reviews best practices for incrementally adopting big data into the enterprise.
As a book, it helps readers get to grips with Big Data. It also gives readers exercises in each chapter that will help them think about the big picture for each topic and then be able to apply the ideas and knowledge from their reading in their work.
Written by: Surachart Opun http://surachartopun.com
Whenever you change the DelayMins setting in Data Guard, you must remember it affects only logs that have not been shipped yet.
DGMGRL> show database sDB01 delaymins
  DelayMins = '5'
DGMGRL> edit DATABASE sDB01 set property delaymins=2;
Property "delaymins" updated

ARC3: Archive log thread 1 sequence 3199 available in 5 minute(s)
Tue Dec 03 15:34:59 2013
ARC0: Archive log thread 1 sequence 3200 available in 2 minute(s)
Tue Dec 03 15:35:15 2013

SQL> select sysdate, SEQUENCE# from v$managed_standby where process='MRP0'

SYSDATE             SEQUENCE#
------------------- ----------
2013-12-03_15:38:00       3199
The old logs are not affected. Let's wait until the last Delay=5 log gets applied.
Tue Dec 03 15:40:02 2013
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3199_827686279.arc
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3200_827686279.arc
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3201_827686279.arc
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3202_827686279.arc
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3203_827686279.arc
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3204_827686279.arc
Media Recovery Log /u01/app/oracle/admin/DB01/arch/DB01_1_3205_827686279.arc
All files which had a delay=2 were "pending" apply. Now we get the delay=2 behavior.
Same if you increase the value
DGMGRL> edit DATABASE sDB01 set property delaymins=30;
Property "delaymins" updated

SQL> select sysdate, SEQUENCE# from v$managed_standby where process='MRP0';

SYSDATE             SEQUENCE#
------------------- ----------
2013-12-03_15:49:04       3224

ARC3: Archive log thread 1 sequence 3224 available in 2 minute(s)
Tue Dec 03 15:47:22 2013
Here again, the old logs are not affected; we need to wait until the last delay=2 log gets applied to get the delay=30 behavior.
While you cannot change the delay of logs that have already been shipped, there is still a way to work around the problem.
If you want to immediately increase the delay to 30 minutes, turn off apply for half an hour.
DGMGRL> edit DATABASE sDB01 set state='APPLY-OFF';
Succeeded.

-- coffee break

DGMGRL> edit DATABASE sDB01 set state='APPLY-ON';
Succeeded.
If you want to decrease the delay from 30 to 2 minutes right now and immediately apply the old logs which have reached this threshold, use sqlplus:
ARC1: Archive log thread 1 sequence 3253 available in 30 minute(s)
Tue Dec 03 16:01:26 2013
ARC3: Archive log thread 1 sequence 3254 available in 2 minute(s)
Tue Dec 03 16:01:37 2013

DGMGRL> edit DATABASE sDB01 set state='APPLY-OFF';
Succeeded.

SQL> recover automatic standby database until time '2013-12-03_16:01:30';
Media recovery complete.

DGMGRL> edit DATABASE sDB01 set state='APPLY-ON';
Succeeded.
I wrote about failover with a delayed standby here.
Another full day in Manchester. I chose to follow a slightly different stream in the morning, so it will not be all about the optimizer today, but more about the OS and virtualization.
Assessment Tool Measures Your Self-Service Portal Strategy
The adoption of self-service portals is accelerating, but too often portals are implemented without a comprehensive strategy that ensures key functionality, such as cohesive user experiences and integration with back-end applications. Now a new assessment tool lets organizations quickly measure the effectiveness of their current self-service portal approach.
Recent studies indicate that self-service portals are now more of a business imperative than ever. One study shows that 55 percent of consumers prefer automated self-service—twice as many as five years ago. The potential benefits are enormous: self-service reduces costs by US$9.00 per employee per month by eliminating manual, paper-intensive processes. View a new infographic to learn about other interesting facets of self-service.
Are You a Self-Service Leader?
Despite the potential to use portals to reduce costs and increase satisfaction, many organizations install packaged applications that focus only on one aspect of a relationship with a customer, employee, or partner. For example, a customer relationship management system may present sales projection information but fail to integrate with the enterprise resource planning system to provide current account status.
"Too often, such an approach doesn't address every aspect of taking action," explains Ashish Agrawal, senior director of product management for Oracle WebCenter Portal.
The Oracle Self-Service Portal User Experience Assessment quickly helps you benchmark your organization's approach, including
- Portal strategy
- Portal agility
- Integration and personalization
- Social interactivity
- Cross-channel integration
Why Oracle WebCenter?
Oracle is the only enterprise software vendor with the complete, open, and integrated approach to support a truly comprehensive self-service portal strategy. With Oracle WebCenter Portal, you can
- Integrate content, information, and business processes
- Easily create and manage mobile portal experiences for desktop browsers, smartphones, tablets, and kiosks
- Build and deploy custom-built components using rich development tools
View the new self-service portal destination page and take the Oracle Self-Service Portal User Experience Assessment.
Access in-depth information about Oracle WebCenter Portal.
Download a white paper about self-service portals, “Oracle WebCenter Portal: High Value Web Experiences Through Self-Service Portals.”
There's more of this great content in this November's WebCenter Newsletter. Subscribe for more content every other month.
We have prepared a special app for this session, separating each use case into a different module. You can download the complete sample application - DangerousApp.zip.
You can check the uploaded slides for more details; below I describe every point presented in addition. All the code listed is part of the sample application - you can download it and check it directly.
1. Batches Of and Slow Query
I believe you have experienced such a situation, when a SQL query was executed fast in SQL Developer, but slow in ADF. You should remember - ADF is executed from the server and there is an additional roundtrip to bring data from the DB. It also depends on how many rows are fetched in the result set from the DB to the server. By default, the Batches Of property is set to 1:
We recommend increasing it for better performance and communication between the WebLogic server and the DB - this allows bringing more rows in fewer roundtrips. There is a way to do this task in a generic way, programmatically - check the source code of the sample application:
If you want to track the time taken to execute a VO, you can do this from the same executeQueryForCollection method - simply get the start and end times and log them:
2. Large Fetch
You can monitor row fetches performed in ADF with the standard ADF BC method createInstanceFromResultSet:
We recommend disabling full table scrolling by setting the RowCountThreshold property for the iterator in the Page Definition to -1. Read more about this here: How To Disable SELECT COUNT Execution for ADF Table Rendering:
Make sure -1 is not set for the LOV ListRangeSize property, otherwise it will fetch all rows from the DB when the LOV popup is opened. Read more about it here: Fix Rowset is Forward Only Error for ADF BC LOV Range Paging (184.108.40.206.0):
3. Groovy Misuse
Keep in mind - when using a Groovy function, it calls a basic SQL statement, fetches records into memory and performs the intended function. This means it could fetch lots of rows into memory and only later produce the requested result. Instead, you should consider using an optimised SQL query directly for better performance, without the Groovy function.
Here is a Groovy sum function example:
From the log you can see the basic SQL statement executed and all rows fetched (2 rows in this example; it will be much more in a real-life scenario):
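For comparison, a sketch of the SQL-side alternative (hypothetical EMPLOYEES table; the aggregation runs in the database and only one row is fetched):

-- let the database do the summing instead of Groovy doing it in memory
SELECT SUM(salary) AS total_salary
FROM   employees;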
Another possible slow-performance case with Groovy - calling a Java method from a Groovy attribute value expression, especially if this Java method in turn calls a VO to fetch data from the DB. The problem is the numerous invocations of the same method by the Groovy expression defined for the attribute value. Here is an example of such a Groovy expression calling a Java method from an attribute value:
The Java method basically calls a VO and executes it:
From the log we can see the VO was invoked and the SQL executed at least two times in this example:
4. Passivation/Activation
When rendering an ADF table, by default two requests are made. Between these requests, passivation/activation may happen, and this is going to slow down application performance. We can minimise this to one request by setting ContentDelivery=immediate for the ADF table, to prevent unnecessary passivation/activation. You can read more about it in this post: Immediate Effect for ADF Table Content Delivery:
An ORDER BY and large-fetch related issue was described in addition; it is documented in this post: Reproducing WebLogic Stuck Threads with ADF CreateInsert Operation and ORDER BY Clause
5. ADF Query Misuse
Make sure not to mix up Bind Variables in the VO while using them from View Criteria. In this case I demo the wrong scenario - a Bind Variable defined as Required, Where type:
This Bind Variable is used later from the View Criteria. For Required, Where-type Bind Variables, JDeveloper allows choosing the Ignore Null Values option. Eventually this will break DB index usage, as IS NULL will be applied to the column:
We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]
Posted by Pete On 29/10/13 At 01:05 PM
We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]
Posted by Pete On 18/10/13 At 02:36 PM
We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]
Posted by Pete On 04/09/13 At 02:45 PM