Feed aggregator

Oracle Security Training In York

Pete Finnigan - Wed, 2016-05-18 16:35

We ran a five-day Oracle Security training event in York, England from September 21st to September 25th at the Holiday Inn hotel. This proved to be very successful and good fun. The event included back-to-back teaching by....[Read More]

Posted by Pete On 22/10/15 At 08:49 PM

Categories: Security Blogs

New Presentation - Building Practical Oracle Audit Trails

Pete Finnigan - Wed, 2016-05-18 16:35

I wrote a presentation on designing and building practical audit trails back in 2012 and presented it once and then never again. By chance I did not post the PDFs of these slides at that time. I did though some....[Read More]

Posted by Pete On 01/10/15 At 05:16 PM

Categories: Security Blogs

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2016-05-18 16:35

Oracle Application Express is a great rapid application development tool where you can write your application's functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Vanderbilt University Promotes Oracle HCM, ERP, and EPM Clouds

Linda Fishman Hoyle - Wed, 2016-05-18 16:14

It’s great when we get to showcase our customers. It is amazing when great customers showcase us!

And that’s exactly what happens in this new video from Vanderbilt University.

The video was created by Vanderbilt, not Oracle, to help launch the university’s SkyVU initiative. SkyVU is an Oracle ERP, HCM, and EPM cloud-based solution that replaces approximately 15 current e-business services and provides a modern and synchronized environment “to allow university faculty and staff to spend less time on paperwork and more time on efforts that contribute to the university’s mission.” The video, a launch event, and website (under construction) are part of the rollout to key stakeholders and users.

The SkyVU video is perfectly aligned with Oracle’s message under the tagline “Modern Demands Need Modern Systems.” It presents thoughts from senior executives across Vanderbilt’s HR, Finance, IT, and academic leadership that will resonate across industries.

You might not be in Higher Ed, but don't let that stop you from watching the video. Vanderbilt is the second largest private employer in the state of Tennessee.

Log Buffer #474: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-05-18 15:46

This Log Buffer Edition covers Oracle, SQL Server and MySQL blogs from across the planet.

Oracle:

You might be thinking “Nothing is more frustrating than encountering a string column that is full of dates”. But there is something worse than that.

Unique constraint WWV_FLOW_WORKSHEET_RPTS_UK violated.

Understanding query slowness after platform change

Database Migration and Integration using AWS DMS

Oracle BPM 12c: Browsing the SOAINFRA

SQL Server:

Adding PK Exceptions to SQLCop Tests

Is a RID Lookup faster than a Key Lookup?

Performance Surprises and Assumptions : DATEADD()

Generate INSERT scripts from SQL Server queries and stored procedure output

PowerShell Desired State Configuration: Pull Mode

Continuous Delivery from the 19th Century to TODAY

MySQL:

ProxySQL versus MaxScale for OLTP RO workloads

Properly removing users in MySQL

MySQL/MariaDB cursors and temp tables

Quick start MySQL testing using Docker (on a Mac!)

Query Rewrite plugin can harm performance

Categories: DBA Blogs

I’m having issues with comment spam

DBMS2 - Wed, 2016-05-18 15:12

My blogs are having a bad time with comment spam. While Akismet and other safeguards are intercepting almost all of the ~5000 attempted spam comments per day, the small fraction that get through are still a large absolute number to deal with.

There’s some danger I’ll need to restrict comments here to combat it. (At the moment they’ve been turned off almost entirely on Text Technologies, which may be awkward if I want to put a post up there rather than here.) If I do, I’ll say so in a separate post. I apologize in advance for any inconvenience.

Categories: Other

Under the Covers of OBIEE 12c Configuration with sysdig

Rittman Mead Consulting - Wed, 2016-05-18 13:50

OBIEE 12c has changed quite a lot in how it manages configuration. In OBIEE 11g configuration was based around system MBeans and the biee-domain.xml as the master copy of settings - and if you updated a configuration directly that was centrally managed, it would get reverted back. Now in OBIEE 12c configuration can be managed directly in text files again - but also through EM still (not to mention WLST). Confused? Yep, I was.

In the configuration files such as NQSConfig.INI there are settings still marked with the ominous comment:

# This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control 

In 11g this meant - dragons be here; turn back all ye who don't want to have your configuration settings wiped next time the stack boots.

Now in 12c, I can make a configuration change (such as enabling BI Server caching), restart the affected component, and the change will take effect -- and persist through a restart of the whole OBIEE stack. All good.
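For example, enabling the cache is a one-line change in the [CACHE] section of NQSConfig.INI (a sketch of the relevant entries; the storage path shown is illustrative, check your own file):

[CACHE]
ENABLE = YES;  # flip from NO to YES to enable the BI Server cache
DATA_STORAGE_PATHS = "/app/oracle/biee/data/obis_cache" 500 MB;
MAX_CACHE_ENTRY_SIZE = 20 MB;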

But ... the fly in the ointment. If I restart just the affected component (for example, BI Server for an NQSConfig.INI change), since I don't want to waste time bouncing the whole stack if I don't need to, then Enterprise Manager will continue to show the old setting:

[Screenshot: Enterprise Manager still showing the old setting]

So even though in fact the cache is enabled (and I can see entries being populated in it), Enterprise Manager suggests that it's not. Confusing.

So ... if we're going to edit configuration files by hand (and personally I prefer to, since it saves firing up a web browser), we need to know how to make sure Enterprise Manager will reflect the change too. Does EM poll the file whilst running? Does it query each component directly for its configuration? Or does it just read the file on startup?

Enter sysdig! What I'm about to use it for is pretty darn trivial (and could probably be done with other standard *nix tools), but is still a useful example. What we want to know is which process reads NQSConfig.INI, and from there isolate the particular component that we need to restart to get it to trigger a re-read of the file and thus correctly show the value in Enterprise Manager.

I ran sysdig with a filter for filename and custom output format to include the process PID:

# trace open() calls on NQSConfig.INI, printing the name and PID of the process that touches it
sudo sysdig -A -p "%evt.num %evt.time %evt.cpu %proc.name (%proc.pid) %evt.dir %evt.info" "fd.filename=NQSConfig.INI and evt.type=open"

Nothing was written (i.e. nothing was polling the file) until I bounced the full OBIEE stack ($DOMAIN_HOME/bitools/bin/stop.sh && $DOMAIN_HOME/bitools/bin/start.sh). During the startup of the AdminServer, sysdig showed:


32222110 12:00:49.912132008 3 java (10409) < fd=874(/app/oracle/biee/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI) name=/app/oracle/biee/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI flags=1(O_RDONLY) mode=0

So - it's the java process that reads it, PID 10409. Which is that?

$ ps -ef|grep 10409
oracle   10409 10358 99 11:59 ?        00:03:54 /usr/java/jdk1.8.0_51/bin/java -server -Xms512m -Xmx1024m -Dweblogic.Name=AdminServer [...]

It's AdminServer -- which makes sense, because Enterprise Manager is a java deployment hosted in AdminServer.

So, if you want to hack the config files by hand, restart either the whole OBIEE stack, or the affected component plus AdminServer in order for Enterprise Manager to pick up the change.
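To script that second option (a sketch only: obis1 is the usual 12c BI Server component name, and I'm assuming the bitools scripts accept AdminServer with -i the same way they do system components; check with status.sh on your own system):

# restart just the BI Server component
$DOMAIN_HOME/bitools/bin/stop.sh -i obis1
$DOMAIN_HOME/bitools/bin/start.sh -i obis1
# then bounce AdminServer so that EM re-reads NQSConfig.INI
$DOMAIN_HOME/bitools/bin/stop.sh -i AdminServer
$DOMAIN_HOME/bitools/bin/start.sh -i AdminServer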

Categories: BI & Warehousing


Resize filesystem

Jeff Moss - Wed, 2016-05-18 11:16

Join Pythian at the 2016 Cloud Computing Expo

Pythian Group - Wed, 2016-05-18 10:46

 

It won’t be too long before we’re touching down in New York City to attend another Cloud Computing Expo, June 7-9 at the Javits Centre.

If you plan on attending the show, be sure to stop by Pythian’s booth (#424) to chat with our technical experts about your cloud strategy development and to find out how Pythian is currently helping companies maximize performance and reduce costs when moving to the cloud.

Our experts will also be presenting on a wide range of insightful cloud topics, which you won’t want to miss.

 

Our Speakers:

Chris Presley (Solutions Architect) will be speaking on the factors that you should consider when moving a database onto a public cloud.

Alex Lovell-Troy (Director, DevOps) will be guiding listeners from Configuration Management to Cloud Orchestration.

Warner Chaves (Principal Consultant) will be providing a tour of Data Platforms as a Service.

 

Interested in meeting up at the show? Drop us a line; we’d love to connect.

We’re looking forward to seeing you there!

Categories: DBA Blogs

Partition Storage -- 8 : Manually Sizing Partitions

Hemant K Chitale - Wed, 2016-05-18 10:44
As demonstrated in the first post in this series, the default for Table Partitions is to start with an initial 8MB Extent. The fourth post demonstrated how to resize a Partition that has already been created and populated.

Here is a method to pre-size the Initial Extent. I start with 3 partitions of different sizes.

SQL> create table my_part_tbl_init_sized (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101) segment creation immediate
4 storage (initial 64K next 64K) tablespace hemant,
5 partition p_200 values less than (201) segment creation immediate
6 storage (initial 1M next 1M) tablespace hemant,
7 partition p_max values less than (maxvalue) segment creation immediate
8 storage (initial 8M next 1M) tablespace hemant)
9 /

Table created.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_MAX 8388608 1048576 HEMANT

SQL>
SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_MAX 8192 1

SQL>


However, the caveat about Partition Extent Sizes when SPLIT is used as shown in the third post still applies.

SQL> alter table my_part_tbl_init_sized
2 split partition p_max
3 at (301)
4 into (partition p_300, partition p_max)
5 /

Table altered.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_300 8388608 1048576 HEMANT
P_MAX 8388608 1048576 HEMANT

SQL>
SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_300 8192 1
P_MAX 8192 1

SQL>


So, SPLITting P_MAX to create P_300 created P_300 with an 8MB extent because the sizing is inherited from P_MAX.  How do I change the Partition Extent sizes on SPLIT?

SQL> alter table my_part_tbl_init_sized
2 split partition p_max
3 at (501)
4 into (partition p_500 storage (initial 64K next 64K) tablespace hemant, partition p_max)
5 /

Table altered.

SQL>
SQL> alter table my_part_tbl_init_sized
2 split partition p_500
3 at (401)
4 into (partition p_400, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_300 8388608 1048576 HEMANT
P_400 65536 65536 HEMANT
P_500 65536 65536 HEMANT
P_MAX 8388608 1048576 HEMANT

6 rows selected.

SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_300 8192 1
P_400 64 1
P_500 64 1
P_MAX 8192 1

6 rows selected.

SQL>


This time, when I split P_MAX I deliberately specified a size for P_500.  Then, when I split P_500 further, the new P_400 inherited the sizing from P_500.

Categories: DBA Blogs

Node-oracledb 1.9.1 with Promises and Node 6 Support is on NPM

Christopher Jones - Wed, 2016-05-18 05:14

Top features: Promise support. Node 6 support.

The new node-oracledb 1.9.1 release adds Promise support, it now supports the new Node 6 (as well as previous releases), and it has some welcome stability fixes. Other nice goodies are mentioned below. Thanks to everyone who gave feedback on our 1.9.0 development branch - we couldn't have done it without you.

node-oracledb 1.9.1 is now available on NPM. Installation instructions are here. Documentation is here.

The changes in node-oracledb 1.9.1 since 1.8
  • Promise support was added. All asynchronous functions can now optionally return Promises. When an asynchronous function is passed a callback function parameter, the familiar callback flow is used. If the callback parameter is omitted, then a Promise is returned. There are some examples in the examples directory, see promises.js and webapppromises.js, and a short sketch after this change list.

    Node Promises allow a programming paradigm that many Node users find comfortable. Some users had implemented their own Promise wrappers for node-oracledb; a few had even been published on NPM. Now the official implementation makes Promises available to everyone.

    The default Promise library is used for Node 0.12, 4, 5 and 6. It can easily be overridden if you wish to incorporate your own implementation. This is also useful for Node 0.10, where there is no native Promise library.

  • A new toQueryStream() method was added for ResultSets. It lets REF CURSORS be fetched via Readable Streams. See the example refcursortoquerystream.js and the sketch after this list.

    It can also be used with ResultSets from top-level queries, allowing them to be converted to streams. However the existing connection.queryStream() method will probably be easier to use in this scenario.

  • Aliases pool.close() and connection.close() have been added for pool.terminate() and connection.release() respectively. This should make it easier to remember which method to use for releasing a connection, terminating a connection pool, or closing a ResultSet: you can just use close().

  • An experimental query Stream _close() method was added. It allows query streams to be closed without needing to fetch all the data. It is marked experimental because there isn't good information about how safe interrupting the current Node Streams implementation is.

    Calling _close() invokes pause(), if necessary. On Node 0.10 you may get a pause event even if you had previously paused the stream. This is because Node 0.10 doesn't support the isPaused() call.

  • Upgraded to NAN 2.3 for Node 6. This removes the deprecation warnings that Node 6 was generating with the older NAN 2.2.

    Older versions of Node can still be used, of course.

  • Mitigated some annoying, seemingly random crashes if JavaScript objects were garbage collected too early. While many well-programmed apps naturally hold the relevant JavaScript objects until no longer required, other apps don't. Premature garbage collection could affect these latter scripts if they finish quickly or are under load.

    A similar change was made to Lob buffers used during LOB insertion.

  • Memory improvements:

    • Fixed some memory leaks when using ResultSets.

    • Fixed a memory leak with the Pool queue timer map.

    • Fixed memory release logic when querying LOBs and an error occurs.

  • Removed an extra call to getRows() made by queryStream() at end-of-fetch.

  • Error changes:

    • Some method parameter validation checks, such as the number or types of parameters, will now throw errors synchronously instead of returning errors via the callback.

    • Improved validation for fetchInfo usage.

    • Increased the internal buffer size for Oracle Database error messages.

    • Altered some node-oracledb NJS-xyz error message text for consistency.

  • The test suite is no longer automatically installed when installing with npm install oracledb from NPM. The test suite was (and will be) getting bigger and bigger. It is not being used by most people, so installing it automatically is wasteful.

    You can still run the tests using a GitHub clone. The updated test README has instructions for this. To allow tests to be moved and run anywhere, we left the tests' require('oracledb') calls without a path prefix so you may need to set NODE_PATH as described.

  • Fixed a symbol name redefinition warning for DATA_BLOB when compiling on Windows.
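To make the two headline items above concrete, here is a minimal sketch of the Promise flow (credentials, connect string, and the query are placeholders, not from the release notes):

var oracledb = require('oracledb');

// Optional: swap in another Promise library, e.g. for Node 0.10,
// which has no native implementation:
// oracledb.Promise = require('es6-promise').Promise;

oracledb.getConnection({
  user          : 'hr',
  password      : 'welcome',
  connectString : 'localhost/orcl'
})
.then(function (conn) {
  // no callback passed, so execute() returns a Promise
  return conn.execute('SELECT first_name FROM employees')
    .then(function (result) {
      console.log(result.rows);
      return conn.close();   // the new alias for connection.release()
    })
    .catch(function (err) {
      // release the connection even when the query fails
      return conn.close().then(function () { throw err; });
    });
})
.catch(function (err) {
  console.error(err.message);
});

And a similar sketch of toQueryStream() on a REF CURSOR out-bind (the PL/SQL procedure get_emp_rs is hypothetical):

connection.execute(
  'BEGIN get_emp_rs(:maxId, :cursor); END;',
  {
    maxId  : 100,
    cursor : { type: oracledb.CURSOR, dir: oracledb.BIND_OUT }
  },
  function (err, result) {
    if (err) { throw err; }
    var stream = result.outBinds.cursor.toQueryStream();
    stream.on('data',  function (row) { console.log(row); });
    stream.on('end',   function ()    { console.log('done'); });
    stream.on('error', function (err) { console.error(err); });
  });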

Overall node-oracledb 1.9.1 is a significant milestone with the addition of Promise support, the support for Node 6, and the general resiliency changes. I'm very happy with this release and would encourage upgrading to it.

What's Next?

What's next? There are still lots of enhancements for node-oracledb on the wish list. Please let us know your priorities.

Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line!

Oracle Midlands : Event #15 – Summary

Tim Hall - Wed, 2016-05-18 03:46

Last night was Oracle Midlands Event #15 with Joel Goodman and Martin Widlake.

First up was Joel with a session called, “Oracle Storage Fragmentation”, where he discussed the potential fragmentation issues possible in tablespaces, tables and indexes. If you’ve been a DBA for a long time it’s easy to think everyone knows this stuff, but I get asked questions about it a lot! The session had a good mix of content, with something to keep everyone happy, from beginners to old timers. Joel is like a walking encyclopedia of Oracle, so it’s always good to hear him present.

Next up was Martin with a session called, “Performance of PL/SQL functions called from SQL”, where he discussed the pros and cons of calling PL/SQL functions from SQL statements. I like Martin’s presentation style. He’s very self-deprecating and amusing. Of course I am biased because he’s part of the family.

What's New in the Cloud

Pat Shuff - Wed, 2016-05-18 02:07
One thing that the last year has taught me is that things change quickly. One of the biggest challenges is to keep up with this change and figure out what is new and what is not. We are going to take a diversion today and look at changes in the Oracle Public Cloud, then get back to provisioning database into different cloud platforms tomorrow. This is important because it helps us define how to differentiate platform as a service from infrastructure as a service with software installed on it. Entries like scale up and scale down of services, Data Guard between two data centers for DBaaS, temporary bursting to larger instances, and various connectors and plug-ins for integration and SOA services are examples of PaaS advantages. Many of these features happen automatically or reduce hundreds of commands that need to be executed to configure a service or integration. Provisioning a database into an IaaS service comes with tradeoffs and sacrifices. It is important to know what added services you are getting when you select PaaS over IaaS. The list of new features helps us understand the added value of PaaS and how we can leverage them.

Let's start with infrastructure and see how things have changed. If you go to the Oracle Public Cloud Documentation you see tabs listing all of the services. For infrastructure this corresponds to compute, storage, and networking. If we click on infrastructure then compute, it takes us to the Compute Documentation. Note that there is a What's New page. At the time of writing this blog, the newest entry is April 2016. The key announcements in this entry include

  • April 2016
    • Oracle Compute Cloud Service — Generally Available (GA) - previously it was controlled availability
    • 1 OCPU subscription - previous min was 500 OCPUs
    • Bursting - non-metered services can double the allocated cores for a short term, with the additional capacity billed like a metered service
    • Oracle-provided Windows images - Windows 2012 R2
    • Oracle-provided Solaris images - Solaris x86 11.3
    • Cloning storage volumes using snapshots
    • Cloning instances using snapshots
    • Resizing storage volumes - storage can be resized while attached to active instance
    • Private Images page moved to a new tab on the web console
    • Instance IP addresses now shown on the Instances page
    • Improved image upload tool
  • March 2016
    • Changes in the web console for creating storage volumes
    • opc-init documentation - startup initialization scripts when a new image is booted
  • February 2016
    • Oracle Network Cloud Service - VPN for Dedicated Compute
    • Security IP list as the destination in a security rule created using the web console
    • SSH key management actions moved to the Network tab of the web console
    • Summary information displayed for each resource in the web console
    • Simplified navigation and improved performance in the web console - Orchestration tab changed

There isn't a What's New page for storage and networking because those are folded into the compute page. Note that there were a few storage entries (resizing volumes attached to an active instance and cloning storage volumes) and network entries (VPN, security lists, SSH key management) on the compute page.

For platform as a service, there is a What's new for DBaaS that details changes to the database as a service and schema as a service options.

  • May 2016
    • Oracle Data Guard available - database creation and replication between data centers
    • Backup and recovery available through the console - previously required ssh access
    • Updated version of Oracle REST Data Services
    • Oracle GlassFish Server removed - services now available through REST services
  • April 2016
    • Configure a service instance’s database as the replication database for Golden Gate
  • March 2016
    • Add an SSH public key to a service instance - allows for multiple ssh keys to an instance
    • Jan 2016 PSU integrated into base image for single-instance databases
    • Jan 2016 bundle patch integrated into base image for Oracle RAC databases
  • February 2016
    • Selectable database character set and national character set during instance creation
    • Jan 2016 PSU available for patching
  • January 2016
    • 2 TB (terabyte) storage volumes now supported
    • Ability to create “temporary” storage volumes using Oracle Compute Cloud Service - storage can be added and removed on a short-term basis as needed.

In the Application Development area there is a What's New for

  • Application Container Service
    • May 2016
      • New Command-Line Interface
      • New utilities for JavaScript and Node packaging and dependency management
      • New deployment configurations for Java-based applications target Oracle Application Container Cloud Service
      • A new Oracle Developer Cloud Service sample project
    • April 2016
      • Node.js 0.10.x, 0.12.x
      • Oracle Linux 6.6
      • Oracle Java SE 7, 8
  • Developer Cloud Service
    • May 2016
      • Deploy to Oracle Application Container Cloud Service instances
      • Snippets support
      • New Member dialog
      • Home tab remembers your last opened child tab
      • Upload artifacts to the project’s Maven repository from the Code tab
      • View the dependency information for Gradle builds
      • The Code button in the Commits view displays files of the current path
      • More pre-defined standard search queries added in the Merge Request tab
      • Audit Log in the Job Details page
      • Build is triggered on push to Git repository
      • Deploy to Oracle Java Cloud Service using Oracle WebLogic REST APIs
      • Lock a Git repository branch
      • Restrict push and merge actions on a protected branch
      • HipChat Webhook support
  • Java Cloud Service
    • May 2016
      • Manage Oracle platform services from a command line interface (CLI)
      • Create and manage access rules
      • Create service instances that use database deployments with cloud-only backups
      • Flexible usage changes to Oracle Java Cloud Service non-metered subscriptions - additional processors can be allocated short term and billed on a metered basis
    • April 2016
      • Create WebLogic Server 12.2.1 service instances
      • Provision service instances with a domain partition
      • Create service instances that use Oracle Real Application Clusters (RAC) databases
      • New patches are available, WebLogic server, Java Developer Kit
    • March 2016
      • Manage SSH access for service instances
      • Add a second load balancer to a service instance
  • Mobile Cloud Service
    • May 2016
      • Location Platform API
      • Microsoft Azure Active Directory authentication
      • export and import artifacts across MCS instances
      • OAuth and JWT token policies for REST connectors
    • April 2016
      • Facebook credentials or their corporate single sign-on credentials
      • JavaScript SDK has been re-tooled to specifically support browser-based mobile apps
      • Cordova SDK supports hybrid development on the Cordova framework

For Content and Collaboration Services

  • Process Cloud Services
    • April 2016
      • New Process Editor
      • New Data Association editor
      • Transformation editor
      • Business Indicator metrics
      • Business Analytics dashboards
      • Outbound REST Connector editor
      • Document-Initiated Process
      • Web Service Message Protection
      • Security Certificates
      • New REST APIs
      • Workspace Enhancements
      • SSO and Authentication
      • Web Form Snapshots
      • Business Objects from JSON instance

For the Integration Cloud Service

  • Integration Cloud Service
    • April 2016
      • Orchestration support - BPEL Process integration
      • Oracle Sales Cloud Adapter - REST APIs and interface catalog
      • REST Adapter enhancements
      • SAP Adapter - inbound integration support
      • Microsoft SQL Server Adapter - inbound integration support
      • File Adapter - inbound integration support
      • Java Messaging Server Adapter - outbound integration support
      • DocuSign Adapter - outbound integration support
      • SuccessFactors Adapter - outbound integration support
      • ServiceNow Adapter - outbound integration support
      • Oracle Field Service Adapter - inbound and outbound integration support
      • Adapter Portal
      • Search improvements
      • Mapper visual enhancements
      • Execution Agent (on-premises Oracle Integration Cloud Service)
    • March 2016
      • Adobe eSign Adapter - outbound integration support
      • File Adapter - outbound integration support (support for 5 MB)
      • Microsoft SQL Server Adapter - outbound integration support
      • FTP Adapter - secure FTP server support
      • SAP Adapter - TRFC, QRFC, and error document support
      • Oracle Database adapter - inbound integration support
      • Oracle Siebel Adapter - inbound integration support
      • Salesforce Adapter - custom WSDL support
      • REST Adapter - multidimensional, nested array support in JSON documents
      • Scheduler - Delete files upon successful retrieval after an error
      • Large payload support - 10 MB
  • SOA Cloud Service
    • May 2016
      • Oracle Enterprise Scheduler is now available as part of Oracle SOA Cloud Service
      • Three new tutorials
    • March 2016
      • Scale Oracle SOA Cloud Service Nodes
      • Non-Metered Subscriptions
      • Oracle Managed File Service
      • Oracle B2B

For Business Analytics the changes are

  • March 2016
    • File size limit increased to 50MB
    • Visualize data in Oracle Applications
    • Update data sources after upload
    • New ways to present data visualizations; Donut charts, Tile views, Text boxes
    • Enhancements to visualizations; Trends, Color management, Thumbnails, Sort data elements, Filter data
    • Quickly copy report columns with “Save Column As…”
    • Build multiple data models
    • Upload data from Excel spreadsheets and OTBI (Oracle Transactional Business Intelligence) data sources
    • Data Loader deprecated
    • Integrate with multiple data sources
    • Whitelist safe domains
    • Index content and schedule crawls
    • Download the public key for remote data connectivity
    • Updates to the REST API

In summary, it is important to look at the new services and new announcements. Some of the changes are relatively small and of low impact. Others provide new features and functions that might change the way you can leverage cloud services. These pages are updated monthly, while the cloud services are typically updated every other week. It is recommended that you get into a routine of checking the What's New links in the documentation. Unfortunately, there is not a single location to look at all of these updates. This blog is an attempt to aggregate the new features for IaaS and PaaS.

XML Gateway Message Designer 2.6.3 Certified on Windows 10 for EBS 12.x

Steven Chan - Wed, 2016-05-18 02:02

XML Gateway Message Designer 2.6.3 is certified on Windows 10 for E-Business Suite 12.1 and 12.2.

Windows Compatibility Mode

XML Gateway Message Designer 2.6.3 is a 32-bit application that can be installed on 32-bit and 64-bit Windows 10 versions.  It must be installed with "Windows compatibility mode" set to "Windows XP (Service Pack 2)" or "Windows XP (Service Pack 3)".


Categories: APPS Blogs

Another reason why you should use the Data Guard Broker for your #Oracle Standby

The Oracle Instructor - Wed, 2016-05-18 02:00

The Data Guard Broker is recommended for various reasons, this one is less obvious: It prevents a Split-Brain problem that may otherwise occur in certain situations. Let me show you:

[oracle@uhesse ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Members:
  prima - Primary database
    physt - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

This is my setup with 12c, but the demonstrated behavior is the same in 11g. I will cause a crash of the primary database now, without damaging any files – like a power outage on the primary site:

[oracle@uhesse ~]$ ps -ef | grep smon
oracle    6279     1  0 08:30 ?        00:00:00 ora_smon_prima
oracle    6786     1  0 08:32 ?        00:00:00 ora_smon_physt
oracle    7168  3489  0 08:43 pts/0    00:00:00 grep --color=auto smon
[oracle@uhesse ~]$ kill -9 6279

Don’t do that at home :-) Now the primary is gone, but of course I can fail over to the standby:

[oracle@uhesse ~]$ dgmgrl sys/oracle@physt
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> failover to physt;
Performing failover NOW, please wait...
Failover succeeded, new primary is "physt"

So far so good, my end users can continue to work now on the new primary. But what happens when the power outage is over and the ex-primary comes back up again?

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed May 18 08:47:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1258291200 bytes
Fixed Size		    2923920 bytes
Variable Size		  452985456 bytes
Database Buffers	  788529152 bytes
Redo Buffers		   13852672 bytes
Database mounted.
ORA-16649: possible failover to another database prevents this database from
being opened

The DMON background process of the new primary communicates with the DMON on the ex-primary, telling it that there cannot be two primary databases within the same Data Guard Broker configuration! Try the same scenario without the broker and you will observe the ex-primary coming all the way up to OPEN. Just wanted to let you know :-)
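As an aside: once the old primary is back in MOUNT state like this, the broker can normally convert it into the new standby with a reinstate, provided Flashback Database was enabled beforehand (a sketch, not part of the demo above):

DGMGRL> connect sys/oracle@physt
DGMGRL> reinstate database prima;

Without Flashback Database, the ex-primary would instead have to be recreated from a backup of the new primary.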


Tagged: Data Guard
Categories: DBA Blogs

An Effective Approach to Migrate Dynamic Thrift Data to CQL, Part 1

Pythian Group - Tue, 2016-05-17 16:00
1. Introduction

Recently Pythian’s Cassandra team worked on one customer’s request to copy data of several Cassandra tables between two Cassandra clusters (not Cassandra data centers). The original approach we used was the Cassandra COPY TO/FROM commands, because the size of the data to be copied was not large (several hundred megabytes per table). The execution of the commands was successful, but for one of the copied tables the application complained about missing data. We examined the data for that table using the Cassandra cqlsh utility and found no discrepancy. After a further discussion with the customer, we realized that the source tables were created and manipulated by a Thrift-based application, and the application can dynamically create different columns for different rows, although each row does share a common set of statically defined columns. It was the data in these dynamic columns that went missing during the data copy process.
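For reference, the copy itself was done with cqlsh COPY commands along these lines (keyspace, table, and file names here are illustrative, not the customer’s):

COPY my_keyspace.my_table TO '/tmp/my_table.csv' WITH HEADER = true;

-- and against the target cluster:
COPY my_keyspace.my_table FROM '/tmp/my_table.csv' WITH HEADER = true;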

We addressed the issue, but in the end we felt that we should write something about Cassandra data migration from Thrift to CQL because this is quite a common problem faced by many existing Cassandra users right now, considering that Cassandra is gradually phasing out Thrift and replacing it with CQL.

This post is the first part of a three-post series. In this post (Part 1), we’re going to dive into the details of the Cassandra storage engine (pre-3.0 version) and explore some fundamental concepts that are key to better understanding the discussion in the following posts. In the next post (Part 2), we’re going to explore how Cassandra tables can be defined statically, dynamically, or in a mixed mode in Thrift, and what the corresponding table definitions in CQL are. In the last post (Part 3), we’ll present an effective approach to migrate dynamically generated data in Thrift into a statically defined CQL table, without suffering any data loss.

1.1. Cassandra Transition from Thrift to CQL API

Apache Thrift is a software framework developed at Facebook for “scalable cross-language services development”. In the early days of Cassandra, the Thrift-based API was the only way to develop Cassandra client applications. But with the maturity of CQL (Cassandra Query Language), Cassandra is gradually moving away from the Thrift API to the CQL API. Along with this trend,

  • Thrift-based client drivers are not officially supported.
  • The Thrift API will not get new Cassandra features; it exists simply for backward compatibility purposes.
  • The CQL-based “cqlsh” utility is replacing the Thrift-based “cassandra-cli” utility as the main command-line tool to interact with Cassandra.

Please note that the latest version of CQL is 3.x (3.4 as of May 2016). CQL 2 is deprecated and has been removed as of Cassandra 2.2.  In the discussion below, we will simply use CQL to refer to CQL 3.

2. Overview of Internal Cassandra Data Storage Structure

Please note that since Cassandra 3.0, the underlying storage engine for Cassandra has gone through a lot of changes. The discussion in this post series is for pre-3.0 Cassandra (v 2.2 and before).

At a very high level, a Cassandra table (or column family, in older terminology) can be seen as a map of sorted maps in the following format:

     Map<RowKey, SortedMap<ColumnKey, ColumnValue>>

A graphical representation looks like below:

[Figure: each row key maps to a sorted map of column key/value pairs]

Please note that although there are more complex structures such as Super Column and Composite Column, the basic idea remains the same and the representation above is good enough for us to describe the problem in this document.

Internally, this storage structure is what both the Thrift and CQL APIs are built on. The difference is that the Thrift API manipulates the storage structure directly, while the CQL API does so through an abstraction layer and presents the data to the user in a tabular form, similar to what SQL does for a relational database.

In order to make this clearer, let’s use an example to compare the outputs of the cassandra-cli and cqlsh command utilities, which are based on the Thrift and CQL protocols respectively. The table schema is defined as below (in CQL format):

CREATE TABLE song_tags (
   id uuid,
   tag_name text,
   PRIMARY KEY (id, tag_name)
) WITH COMPACT STORAGE

This simple table is used to maintain song tags. After inserting several rows into this table, we examined the table content using both the Thrift-based “cassandra-cli” utility and the CQL-based “cqlsh” utility. The result is as below:

[Screenshot: the same song_tags data as displayed by cassandra-cli and by cqlsh]

From the example above, it can easily be noticed that between Thrift and CQL, the terms “row” and “column” don’t share the same meaning, and this causes some confusion when people do Thrift-to-CQL migration. For example,

  • In Thrift, one row means one data partition that is determined by the partition key definition. Each row has multiple columns, or more precisely “cells”. Each cell contains the time-stamp of when it is created. The name/key of the cell/column is not necessarily the name as defined in the table definition, especially when there are clustering column(s) defined (just as in the example above)
  • In CQL, one row could be one partition, or could be one part of a partition. It really depends on how the partition key and clustering key are designed

Due to such differences, I’m going to use the following terms in this document for clarification:

  • “CQL Row”, or simply “Row”, refers to a row in CQL context
  • “Storage Row”, refers to a row in Thrift context
  • “Column”, refers to a column in both CQL and Thrift contexts
  • “Cell” particularly refers to a column in Thrift context
  • “Table” refers to a table in CQL context, or a column family in Thrift context
  • “Partition” refers to a data partition determined by the hash key. In the Thrift context, “Partition” and “Storage Row” have the same meaning. In the CQL context, one “Partition” includes multiple “CQL Rows”.
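To make the difference concrete, here is a hand-written sketch (not captured tool output) of how two tags inserted for one song would surface in each utility, given the song_tags table above:

Thrift view (cassandra-cli): one Storage Row, one cell per tag, with empty values

RowKey: 845c6f10-...
=> (name=blues, value=, timestamp=1463589540123456)
=> (name=jazz, value=, timestamp=1463589541123456)

CQL view (cqlsh): two CQL Rows in the same Partition

 id           | tag_name
--------------+----------
 845c6f10-... |    blues
 845c6f10-... |     jazz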
3. Compact Storage

In CQL, a table property called COMPACT STORAGE is provided for backward compatibility.  As the name suggests, tables created with this directive consume less storage space than those created without it.

To make this clear, let’s again use an example. Two tables are created to keep track of the average student grades for classes in a school. The table definitions are exactly the same, except that one (avg_grade) is defined without the COMPACT STORAGE property and the other (avg_grade2) is defined with it. The same records are inserted into both tables, as below:

CREATE TABLE avg_grade (
    student_id int,
    class_id int,
    grade double,
    PRIMARY KEY (student_id, class_id)
)

CREATE TABLE avg_grade2 (
    student_id int,
    class_id int,
    grade double,
    PRIMARY KEY (student_id, class_id)
) WITH COMPACT STORAGE

insert into avg_grade(student_id, class_id, grade) values (1, 1, 75.2);
insert into avg_grade(student_id, class_id, grade) values (2, 1, 81.3);

insert into avg_grade2(student_id, class_id, grade) values (1, 1, 75.2);
insert into avg_grade2(student_id, class_id, grade) values (2, 1, 81.3);

The statements are executed in the cqlsh utility. The data is then flushed from memory to disk with the “nodetool flush” command. After that, the sstable2json utility is used to examine the contents of the SSTable data files for both tables. Below is the output:

[Screenshot: sstable2json output for avg_grade and avg_grade2]
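The screenshot is not reproduced here, but the output had roughly the following shape (a hand-written sketch with shortened values; the exact JSON layout varies between Cassandra versions):

avg_grade (non-compact): a marker cell "1:" plus the column name embedded in each cell key

[{"key": "00000001",
  "cells": [["1:", "", 1463589540000000],
            ["1:grade", "75.2", 1463589540000000]]}]

avg_grade2 (COMPACT STORAGE): a single cell named only by the clustering column value

[{"key": "00000001",
  "cells": [["1", "75.2", 1463589540000000]]}]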

From the output above, we can see that the table NOT in compact storage mode has more cells within each Storage Row (e.g. the extra cell with an empty value for the clustering column “class_id”), and each Cell stores more metadata (e.g. the name of the “grade” column is repeated in each Storage Row). So just from a storage perspective, having a table defined in compact storage can save quite some storage space, especially when dealing with many columns and/or complex column types like collections.

In Thrift, tables are always stored in compact storage mode. In CQL, tables are by default stored in non-compact storage mode, unless they are defined with the “COMPACT STORAGE” property. As a result, CQL tables without the “COMPACT STORAGE” property are not visible in Thrift-based utilities like cassandra-cli.

When a CQL table is defined with the COMPACT STORAGE property, it gets the benefit of saving some disk space. However, there are also some caveats to pay attention to. For example:

  • It cannot have new columns added or existing columns dropped.
  • If it has a compound primary key (multiple columns), then at most one column can be defined as not part of the key.
  • It cannot have a column defined with non-frozen collection types.
    • Note that for people who are not familiar with the concept of “frozen” vs. “non-frozen” collections: a frozen collection serializes all sub-components of the collection into one single value when stored, which is treated as a blob and must be updated as a whole. In contrast, a non-frozen collection allows updates on individual elements.
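In CQL terms the distinction looks like this (a quick illustrative definition, not taken from the post):

CREATE TABLE song_info (
    id uuid PRIMARY KEY,
    tags list<text>,             -- non-frozen: individual elements can be updated
    credits frozen<list<text>>   -- frozen: the whole list must be rewritten at once
);

This is also why the caveat above only rules out the non-frozen form for COMPACT STORAGE tables: a frozen collection is stored as a single cell, while a non-frozen one needs one cell per element.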
Categories: DBA Blogs
