Feed aggregator

Oracle WebLogic 12.2.1.x Configuration Guide for Oracle Utilities available

Anthony Shorten - Thu, 2018-06-21 19:06

A new whitepaper is now available for use with Oracle Utilities Application Framework based products that support Oracle WebLogic 12.2.1.x and above. The whitepaper walks through the setup of the domain using the Fusion Domain Templates instead of the templates supplied with the product. In future releases of Oracle Utilities Application Framework, the product-specific domain templates will not be supplied, as the Fusion Domain Templates take a more prominent role in deploying Oracle Utilities products.

The whitepaper covers the following topics:

  • Setting up the Domain for Oracle Utilities products
  • Additional Web Services configuration
  • Configuration of Global Flush functionality in Oracle WebLogic 12.2.1.x
  • Frequently asked installation questions

The whitepaper is available as Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) from My Oracle Support.

Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7

Wim Coekaerts - Thu, 2018-06-21 10:08

Yesterday we released the 5th version of our "UEK" package for Oracle Linux 7 (UEKR5). This kernel version is based on a 4.14.x mainline Linux kernel. One of the nice things is that 4.14 is an upstream Long Term Stable kernel version, maintained by gregkh.

UEKR5 is a 64-bit only kernel. We released it on x86(-64) and ARM64 (aarch64) and it is supported starting with Oracle Linux 7.

Updating to UEKR5 is easy - just add the UEKR5 yum repo and update, as sketched below. We have some release notes posted here and a more detailed blog here.
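For example, on an Oracle Linux 7 box the whole switch looks roughly like this (a sketch: ol7_UEKR5 is the repo id published on yum.oracle.com, so verify it against your own yum configuration first):

# enable the UEKR5 channel (yum-config-manager is part of yum-utils)
sudo yum-config-manager --enable ol7_UEKR5
# pull in the new kernel and boot into it
sudo yum update
sudo reboot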

A lot of new stuff in UEKR5... we also put a few extra tools in the yum repo that let you make use of these newer features where tool updates are needed: xfsprogs, btrfsprogs, the ixpdimm libraries and pmemsdk, updated dtrace utils, updated bcache, updated iproute, etc.

For those that don't remember, we launched the first version of our kernel for Oracle Linux back in 2010, when we launched the 8-socket Exadata system. We have been releasing a new Linux kernel for Oracle Linux on a regular basis ever since. Every Exadata system - in fact, every Oracle Engineered system that runs Linux - uses Oracle Linux with one of the versions of UEK inside. So for customers it's the most tested kernel out there: you can run the exact same OS software stack as we run on our biggest and fastest database servers, on-premises or in the cloud, and in fact the exact same OS software stack as we run inside Oracle Cloud in general. That's pretty unique compared to other vendors, where the underlying stack is a black box. Not here.

  • 10/2010 - 2.6.32 [UEK] OL5/OL6
  • 03/2012 - 2.6.39 [UEKR2] OL5/OL6
  • 10/2013 - 3.8 [UEKR3] OL6/OL7
  • 01/2016 - 4.1 [UEKR4] OL6/OL7
  • 06/2018 - 4.14 [UEKR5] OL7

The source code for UEKR5 (as has been the case since day 0) is fully available publicly: the entire git repo is there with the changelog, and all the patches are there with all the changelog history - not just some tar file with patchfiles on top of tar files to obfuscate things for some reason. It's all just -right there-. In fact, we recently even moved our kernel git repo to GitHub.

Have at it.

 

Demo: GraphQL with node-oracledb

Christopher Jones - Thu, 2018-06-21 09:18

Some of our node-oracledb users recently commented they have moved from REST to GraphQL so I thought I'd take a look at what it is all about.

I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples.

In brief, GraphQL:

  • Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements.

  • Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints, as sketched below.
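As a taste of that parameterization, a query can declare variables that the client supplies separately. This is a generic GraphQL sketch, not taken from the demo that follows:

query ($id: Int) {
  blog(id: $id) {
    id
    title
    content
  }
}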

Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically it asks for the id, the title and the content of that blog to be returned:

{
  blog(id: 2) {
    id
    title
    content
  }
}

The response from the server would contain the three requested fields, for example:

{ "data": { "blog": { "id": 2, "title": "Blog Title 2", "content": "This is blog 2" } } }

Compare that result with this query that does not ask for the title:

{
  blog(id: 2) {
    id
    content
  }
}

With the same data, this would give:

{ "data": { "blog": { "id": 2, "content": "This is blog 2" } } }

So, unlike REST, we can choose exactly what data needs to be transferred. This makes client development more flexible.

Let's look at some code. I came across this nice intro blog post today, which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend.

The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types:

type Blog {
  id: Int!,
  title: String!,
  content: String!
}

The exclamation mark means a field is required.

The part of the GraphQL Schema to query a blog post by id is specified in the root type 'Query':

type Query {
  blog(id: Int): Blog
}

This defines a capability to query a single blog post and return the Blog type we defined above.

We may also want to get all blog posts, so we add a "blogs" field to the Query type:

type Query {
  blog(id: Int): Blog
  blogs: [Blog],
}

The square brackets indicate that a list of Blogs is returned.

A query to get all blogs would be like:

{
  blogs {
    id
    title
    content
  }
}

You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint and that endpoint will determine how to handle each. There is no need for multiple endpoints.

To manipulate data requires some 'mutations', typically making up the CUD of CRUD:

input BlogEntry {
  title: String!,
  content: String!
}

type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}

To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database.

In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.

Combined, we represent the schema in Node.js like:

const typeDefs = `
type Blog {
  id: Int!,
  title: String!,
  content: String!
}
type Query {
  blogs: [Blog],
  blog(id: Int): Blog
}
input BlogEntry {
  title: String!,
  content: String!
}
type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}`;

This is the contract, defining the data types and available operations.

In the backend, I decided to use Oracle Database 12c's JSON features. Needless to say, using JSON gives developers the power to modify and improve the schema during the life of an application:

CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON));

INSERT INTO blogtable VALUES (
  '{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}');
INSERT INTO blogtable VALUES (
  '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}');
COMMIT;

CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id);

CREATE SEQUENCE blog_seq START WITH 3;

Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but can be rewritten for 12.1.0.2.)
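For illustration, here is one possible 12.1.0.2-compatible rewrite of the single-blog lookup using JSON_VALUE instead of the dotted notation; this is my sketch, not from the original post:

SELECT b.blog
FROM blogtable b
WHERE JSON_VALUE(b.blog, '$.id' RETURNING NUMBER) = :id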

The Node.js ecosystem has some powerful modules for GraphQL. The package.json is:

{ "name": "graphql-oracle", "version": "1.0.0", "description": "Basic demo of GraphQL with Oracle DB", "main": "graphql_oracle.js", "keywords": [], "author": "christopher.jones@oracle.com", "license": "MIT", "dependencies": { "oracledb": "^2.3.0", "express": "^4.16.3", "express-graphql": "^0.6.12", "graphql": "^0.13.2", "graphql-tools": "^3.0.2" } }

If you want to see the full graphql_oracle.js file it is here.

Digging into it, the application has some 'Resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created:

const resolvers = {
  Query: {
    blogs(root, args, context, info) {
      return getAllBlogsHelper();
    },
    blog(root, {id}, context, info) {
      return getOneBlogHelper(id);
    }
  },
  // . . .
};

To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app:

const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers});

app.use('/graphql', graphql({
  graphiql: true,
  schema
}));

app.listen(port, function() {
  console.log('Listening on http://localhost:' + port + '/graphql');
})

On the Oracle side, we want to use a connection pool, so the first thing the app does is start one:

await oracledb.createPool(dbConfig);
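The dbConfig object isn't shown in these snippets; it is just the usual node-oracledb credentials object, along these lines (the values here are illustrative):

// illustrative connection settings - adjust for your environment
const dbConfig = {
  user: 'demo',
  password: 'demo',
  connectString: 'localhost/xe'
};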

The helper functions can get a connection from the pool. For example, the helper to get one blog is:

async function getOneBlogHelper(id) {
  let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id';
  let binds = [id];
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds);
  await conn.close();
  return JSON.parse(result.rows[0][0]);
}

The JSON.parse() call nicely converts the JSON string that is stored in the database into the JavaScript object to be returned.

Starting the app and loading the endpoint in a browser gives a GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right hand pane gives the API documentation:

To insert a new blog, the createBlog mutation can be used:
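For example, a mutation along these lines creates a post and asks for the stored fields back (the title and content values are illustrative; the id comes from the blog_seq sequence):

mutation {
  createBlog(input: {title: "New Blog", content: "This is a new blog"}) {
    id
    title
    content
  }
}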

If you want to play around more, I've put the full set of demo-quality files for you to hack on here. You may want to look at the GraphQL introductory videos, such as this comparison with REST.

To finish, GraphQL has the concept of real-time updates with subscriptions, something that ties in well with the Continuous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it.
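Just to give a flavour, node-oracledb 2.3's connection.subscribe() registers a query and a callback to be invoked when the underlying data changes. A minimal sketch, assuming the connecting user has the CHANGE NOTIFICATION privilege:

// rough sketch of a CQN registration - not part of the demo files
await conn.subscribe('blogsub', {
  sql: 'SELECT * FROM blogtable',
  callback: msg => console.log('change notification:', msg)
});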

Oracle Introduces New Java SE Subscription Offering for Broader Enterprise Java Support

Oracle Press Releases - Thu, 2018-06-21 09:00
Press Release
Oracle Introduces New Java SE Subscription Offering for Broader Enterprise Java Support
Java SE Subscription Provides Licensing and Support for Java SE on Servers, Desktops, and Cloud Deployments

Redwood Shores Calif—Jun 21, 2018

In order to further support the millions of worldwide businesses running Java in production, Oracle today announced Java SE Subscription, a new subscription model that covers all Java SE licensing and support needs. Java SE Subscription removes enterprise boardroom concerns around timely software performance, stability and security updates for mission-critical applications. Java SE Subscription complements Oracle’s long-standing and continued free Java SE releases and stewardship of the OpenJDK ecosystem, where Oracle now produces open source OpenJDK binaries, serving developers and organizations that do not need commercial support or enterprise management tools.

Java SE Subscription provides commercial licensing, including commercial features and tools such as the Java Advanced Management Console to identify, manage and tune Java SE desktop use across the enterprise. It also includes Oracle Premier Support for current and previous Java SE versions. For further details, please visit the FAQ at: http://www.oracle.com/technetwork/java/javaseproducts/overview/javasesubscriptionfaq-4891443.html

“Companies want full flexibility over when and how they update their production applications,” said Georges Saab, VP Java Platform Group at Oracle. “Oracle is the world’s leader in providing both open source and commercially supported Java SE innovation, stability, performance and security updates for the Java Platform. Our long-standing investment in Java SE ensures customers get predictable and timely updates.”

“The subscription model for updates and support has been long established in the Linux ecosystem. Meanwhile, people are increasingly used to paying for services rather than products,” said James Governor, analyst and co-founder of RedMonk. “It’s natural for Oracle to offer a monthly Java SE subscription to suit service-based procurement models for enterprise customers.”

"At Gluon we are strong believers in commercial support offerings around open source software, as it enables organizations to continue to produce software, and the developer community to ensure that they have access to the source code." said Johan Vos, Co-founder and CTO of Gluon. "Today's announcement from Oracle ensures those in the Java Community that need an additional level of support can receive it, and ensures that Java developers can still leverage the open-source software for creating their software. The Java SE Subscription model from Oracle is complementary to how companies like Gluon tailor their solutions around Java SE, Java EE and JavaFX on mobile, embedded and desktop."

To learn more about Java SE Subscription, please visit https://www.oracle.com/java/java-se-subscription.html. Java is the world’s most popular programming language, with over 12 million developers running Java. Java is also the #1 developer choice for cloud, with over 21 billion cloud-connected Java virtual machines.

Contact Info
Alex Shapiro
Oracle
+1 415-608-5044
alex.shapiro@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Alex Shapiro

  • +1 415-608-5044

Kscope18: It's a Wrap!

Rittman Mead Consulting - Thu, 2018-06-21 08:23

As announced a few weeks back, I represented Rittman Mead at ODTUG's Kscope18, hosted in the magnificent Walt Disney World Dolphin Resort. It's always hard to be credible when telling people you are going to Disney World for work, but Kscope is a must-go event if you are in the Oracle landscape.


In the Sunday symposium, Oracle PMs shared hints about the products' latest capabilities and roadmaps; then came three full days of presentations spanning from the traditional Database, EPM and BI tracks to new entries like Blockchain. On top of this came the opportunity to be introduced to a network of Oracle experts, including Oracle ACEs and Directors, PMs, and people willing to share their experience with Oracle (and other) tools.

Sunday Symposium and Presentations

I attended the Oracle Analytics (BI and Essbase) Sunday Symposium run by Gabby Rubin and Matt Milella from Oracle. It was interesting to see the OAC product enhancements and roadmap as well as the feature catch-up in the latest release of OBIEE on-premises (version 12.2.1.4.0).

As expected, most of the push is towards OAC (Oracle Analytics Cloud): all new features will be developed there and eventually (but no assurance on this) ported to the on-premises version. This makes a lot of sense from Oracle's point of view since it gives them the ability to produce new features quickly: they need to be tested against only a single set of HW/SW rather than the multitude they support on-premises.

Most of the enhancements are expected in the Mode 2/Self-Service BI area covered by Oracle Analytics Cloud Standard since (a) this is the overall trend of the BI industry and (b) the features requested by traditional dashboard-style reporting are already well covered by OBIEE.
The following are just a few of the items you could expect in future versions:

  • Recommendations during the data preparation phase like GeoLocation and Date enrichments
  • Data Flow enhancements like incremental updates or parametrized data-flows
  • New visualizations and, in general, more control over the settings of individual charts.

In general, Oracle's idea is to provide a single tool that meets the needs of both Mode 1 and Mode 2 Analytics (Centralized vs Self-Service) rather than focusing on solving one need at a time like other vendors do.

Special mention goes to Oracle Autonomous Analytics Cloud, released a few weeks ago, which differs from traditional OAC in that backups, patching and service monitoring are now managed automatically by Oracle, freeing the customer from those tasks.

During the main conference days (Mon-Wed) I attended a lot of very insightful presentations and the Oracle ACE Briefing, which gave me ideas for future blog posts, so stay tuned! As written previously, I had two sessions accepted for Kscope18: "Visualizing Streams" and "DevOps and OBIEE: Do it Before it's Too Late". In the following paragraphs I'll share details (and links to the slides) of both.

Visualizing Streams

One of the latest trends in the data and analytics space is the transition from old-style batch-based reporting systems, which by design add a delay between event creation and its appearance in reports, to the concept of streaming: ingesting and delivering event information and analytics as soon as the event is created.


The session explains how the analytics space has changed in recent times, providing details on how to set up a modern analytical platform which includes streaming technologies like Apache Kafka, SQL-based enrichment tools like Confluent's KSQL, and connections to self-service BI tools like Oracle's Data Visualization via SQL-on-Hadoop technologies like Apache Drill. The slides of the session are available here.

DevOps and OBIEE: Do it Before it's Too Late

In the second session (slides here) I initially went through the motivations for applying DevOps principles to OBIEE: the self-service BI wave started as a response to the long time-to-delivery associated with old-school centralized reporting projects. Huge monolithic sets of requirements to be delivered, no easy way to provide development isolation, and manual testing and code promotion were only a few of the stoppers for fast delivery.


After an initial analysis of the default OBIEE development methods, the presentation explains how to apply DevOps principles to an OBIEE (or OAC) environment, specifically:

  • Code versioning techniques
  • Feature-driven environment creation
  • Automated promotion
  • Automated regression testing

It also provides details on how the Rittman Mead BI Developer Toolkit, partially described here, can act as an accelerator for the adoption of these practices in any custom OBIEE implementation and delivery process.

As mentioned before, the overall Kscope experience is great: plenty of technical presentations, roadmap information, networking opportunities and also much fun! Looking forward to Kscope19 in Seattle!

Categories: BI & Warehousing

Intercollegiate Tennis Association and Oracle Announce Multi-Year Extension

Oracle Press Releases - Thu, 2018-06-21 07:00
Press Release
Intercollegiate Tennis Association and Oracle Announce Multi-Year Extension

TEMPE, Ariz. and Redwood Shores, Calif.—Jun 21, 2018

The Intercollegiate Tennis Association and Oracle are excited to announce a multi-year extension to their alliance, as Oracle continues to strengthen its ongoing commitment to collegiate tennis.

The Oracle ITA alliance includes Oracle’s ongoing sponsorship of the Oracle ITA Collegiate Tennis Rankings, the Oracle ITA Masters and Oracle ITA National Fall Championships, while adding title sponsorships to the ITA Summer Circuit (now branded as the Oracle ITA Summer Circuit Powered By UTR) and the Division I and Division III National Team Indoor Championships.

“Our partnership with ITA has been a great success to date, and we’re eager to keep expanding the game,” said Oracle CEO Mark Hurd. “We want to ensure that young players understand that collegiate tennis offers terrific opportunities to improve their games, play in great venues in a team environment, all while getting an education that will serve them well for the rest of their lives.”

ITA CEO Timothy Russell added, “The ITA is thrilled to be continuing our wonderful working relationship with Oracle; an incredibly innovative company with an astonishing forward-thinking CEO. Both parties are committed to positively shaping the future of college tennis. Oracle’s attention to creating events of high distinction, in which the best players in college want to participate and fans want to watch, either in person or from the comfort of their own home via television and live streaming, is elevating our game.”

The newly-christened Oracle ITA Summer Circuit Powered by UTR will serve as a model for level-based play in the nearly 50 tournaments contested during the Summer Circuit’s six-week duration. The Oracle ITA Summer Circuit Powered by UTR, which began in 1993, provides college tennis players, along with junior players, alumni and young aspiring professionals, the opportunity to compete in organized events during the summer months. For the third consecutive year, the circuit will feature nearly 50 tournaments across 23 different states, during a six-week stretch from late June to the end of July. The circuit will culminate at the ITA National Summer Championships, hosted by TCU from August 10-14, which will feature prize money for the first time.

“The ITA Summer Circuit is yet another great opportunity to influence the quality of American tennis and Oracle is excited to play a part in it,” said Hurd. “The summer circuit is the ideal opportunity for all players, from collegians to juniors, to play competitively year-round.”

Oracle will now have an expanded presence in the dual-match portion of the college tennis schedule by becoming the title sponsor of all four National Team Indoor Championships. Contested during the months of February and March, the Oracle ITA National Team Indoor Championships feature 16 of the nation’s top men’s and women’s teams from Division I, and eight highly-ranked men’s and women’s Division III teams vying for a national indoor title.

“We are excited that Oracle will serve as the title sponsor for the National Team Indoor Championships,” said Russell. “The National Team Indoor Championships feature elite fields and stand as a good season-opening barometer for how the dual-match season will play out.”

Serving as the culmination to the fall season, the Oracle ITA National Fall Championships will take place November 1-5, 2018, at the Surprise Tennis & Racquet Complex in Surprise, Arizona, which recently hosted the 2018 NCAA Division II National Championships and previously hosted the 2016 ITA Small College Championships.

The Oracle ITA National Fall Championships features 128 of the nation’s top collegiate singles players (64 men and 64 women) and 64 doubles teams (32 men’s teams and 32 women’s teams). In its second year, having replaced the ITA National Indoor Intercollegiate Championships, it is the lone event on the collegiate tennis calendar to feature competitors from all five divisions playing in the same tournament.

Created in 2015, the Oracle ITA Masters has established itself as one of the premier events of the collegiate tennis season. The Oracle ITA Masters features singles draws of 32 for men and women, and a mixed doubles event with a 32-draw. Players are chosen based upon conference representation, similar to the NCAA Tournament.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
212-508-7935
deborah.hellinger@oracle.com
Dan Johnson
ITA Marketing and Communications
303-579-4878
djohnson@itatennis.com
About the ITA

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees women’s and men’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men's and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • 212-508-7935

Dan Johnson

  • 303-579-4878

Oracle Partner PaaS Summer Camps VIII - August 27 - 31, 2018

The Oracle PaaS Summer Camp is a one-week training for cutting-edge software consultants, engineers and enterprise-level professionals. The #PaaSSummerCamp brings together the world’s leading...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DBA_HIST_SQLSTAT and GV$SQL

Tom Kyte - Wed, 2018-06-20 19:46
Hi, I was trying to create a dashboard comparing historical executions and current executions of multiple SQL statements. I have noticed some differences between stats in GV$SQL and DBA_HIST_SQLSTAT. Could you please help us to understand below po...
Categories: DBA Blogs

Need help in formulating query to fetch previous quote times

Tom Kyte - Wed, 2018-06-20 19:46
Hi AskTom Team, I have been a big fan of this site since 1999 around the time it came up. First of all, again a big Thank you for your support to Oracle Community since past two decades. I have immensely benefited from this. This time arou...
Categories: DBA Blogs

Partitioned table cleanup

Tom Kyte - Wed, 2018-06-20 19:46
Hi I have a table that was created for debugging purposes. Every night a jobs kicks off creating a partition of the days inserts on the table based on date. Needless to say have the partitions grown rapidly and have taken up a lot space in the tab...
Categories: DBA Blogs

Implementing Master/Detail in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2018-06-20 18:29

This is a quick demo that combines two techniques I showed in previous blogs - filtering lists, and accessing the value of a selected row in a table. Leveraging these two together, it's quite easy to create a page that has two tables on it - one is the parent and the other is the child; once you select a record in the parent, the child table updates to show only the related child records.

Here is a quick demo:

The two steps we are doing are:

  • Create an action flow on the change of first-selected-row attribute of the table
  • In the flow use the assign variable function to set the filterCriterion of the child table to check for the value selected in the master (a rough sketch follows below)

As you can see - quite simple.
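The assigned filterCriterion ends up being a small JSON structure along these lines; the attribute and variable names here are illustrative, not taken from the demo:

{
  "op": "$eq",
  "attribute": "departmentId",
  "value": "{{ $page.variables.selectedDepartmentId }}"
}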

 

Categories: Development

Error!?! What's going in APEX? The easiest way to Debug and Trace an Oracle APEX session

Dimitri Gielis - Wed, 2018-06-20 13:55
There are some days you just can't explain the behaviour of the APEX Builder or your own APEX application. Or do you recognize this sentence from your end-user? "Hey, it doesn't work..."

In Oracle APEX 5.1 and 18.1, here's how you start to see in the land of the blind :)

Logged in as a developer in APEX, go to Monitor Activity:


 From there go to Active Sessions:



You will see all active sessions at that moment. Looking at the Session Id or Owner (User) you can identify the session easily:


Clicking on the session id shows the details: which page views have been done, which calls, the session state information and the browser they are using.

But even more interesting, you can set the Debug Level for that session :)


When the user requests a new page or action, you see a Debug ID of that request.


Clicking on the Debug ID, you see straight away all the debug info and hopefully it gives you more insight why something is not behaving as expected.



A real use case: custom APEX app

I had a really strange issue which I couldn't explain at first... an app that had been running for several years suddenly didn't show info in a classic report; it got "no data found". When logging out and back in, it would show the data in the report just fine. The user said it was not consistent: sometimes it worked, sometimes not... even worse, I couldn't reproduce the issue. So I told her to call me whenever it happened again.
One day she called, so I followed the steps above to set debug on for her session, and then I saw it... the issue was due to pagination. In a previous record she had paginated to the "second page", but for the current record there was no "second page". With the debug information I could see exactly why it was behaving like that... APEX rewrote the query with rows > :first_row, which was set to 16, but for that specific record there were no more than 16 records, so it showed no data found.
Once I figured that out, I could quickly fix the issue by resetting pagination on opening of the page.

Debug Levels

You can set different Debug Levels. Level 9 (= APEX Trace) gives you the most info, whereas debug level 1 only shows the errors and not much other info. I typically go with APEX Trace (level 9).
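As a side note, you can also request a debug level straight from the URL, provided debugging is allowed for the application: the Debug position of the f?p syntax accepts YES, NO or LEVEL3 up to LEVEL9. A sketch with illustrative application and page numbers:

f?p=100:1:&APP_SESSION.::LEVEL9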

The different debug levels with the description:


Trace Mode

In case you want to go a step futher you can also set Trace Mode to SQL Trace.


This will do the following behind the scenes: alter session set events '10046 trace name context forever, level 12';
To find out where the trace file is stored, go to SQL Workshop > SQL Scripts and run

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

It will return the path of the trace file. When looking into that directory, you want to search for the filename which contains the APEX session id (2644211946422) and the time you ran the trace.


In Oracle SQL Developer you can then look at those trace files a bit more easily. You can also use TKPROF or other tools.


When I really have performance issues and I need to investigate further, I like to use Method R Workbench. The Profiler interprets the trace file(s) and gives an explanation of what's going on.


And with the different tools on the left, you can drill down in the files.


I'm definitely not a specialist in reading those trace files, but the above tools really help me understanding them. When I'm really stuck I contact Cary Millsap - or I call him Mr Trace - he's the father of those tools and knows trace files inside out :)

A second use case: APEX Builder

I was testing our APEX Office Print plugin in APEX 18.1 and for some reason APEX was behaving differently than earlier versions, but I didn't understand why. I followed the above method again to turn debug and trace on for my own session - so even when you are in the APEX Builder you can see what APEX is doing behind the scenes.


Debugging and Tracing made easy

I hope this post helps you see the light when you are in the dark. Let the force be with you :)

Categories: Development

Dealing with automatic restart and SQL Docker containers

Yann Neuhaus - Wed, 2018-06-20 12:57

A couple of weeks ago, a customer asked me how to restart containers automatically after a reboot of the underlying host. In his context, it was not an insignificant question because some of the containers host SQL Server databases and he wanted to stay relaxed as long as possible, even after maintenance of the Linux host by the sysadmins. The (DEV) environment concerned doesn't include container orchestration like Swarm or Kubernetes.


The interesting point is that there are several ways to perform the job, depending on the context. Let's say I was concerned by services outside Docker that depend on the containerized database environment.

The first method is a purely sysadmin solution based on systemd, the Linux process manager, which can be used to automatically restart services that fail, with restart policy values of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always. The latter fits well with my customer's scenario.

Is there an advantage to using this approach? Well, in my customer's context some services outside Docker are dependent on the SQL container, and using systemd is a good way to control dependencies.

Below is the service unit file used during my mission, and I have to give credit to the SQL Server Customer Advisory Team, who published an example of this file in their monitoring solution based on InfluxDB, Grafana and collectd. The template file includes unit specifiers that make it generic; I just had to name the systemd unit file according to which container I wanted to control.

[Unit]
Description=Docker Container %I
Requires=docker.service
After=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStart=/usr/bin/docker start -a %i
ExecStop=/usr/bin/docker stop -t 2 %i

[Install]
WantedBy=default.target

 

Let’s say I have one SQL Server container named sql. The next step will consist in copying the service template to /etc/systemd/system and changing the service name accordingly to the SQL container name. Thus, we may now benefit from the systemctl command capabilities

$ sudo cp ./service-template /etc/systemd/system/docker-container@sql.service
$ systemctl daemon-reload
$ sudo systemctl enable docker-container@sql

 

That’s it. I may get the status of my new service as following

$ sudo systemctl status docker-container@sql

 


 

I can also stop and start my SQL docker container like this:

[clustadmin@docker3 ~]$ sudo systemctl stop docker-container@sql
[clustadmin@docker3 ~]$ docker ps -a
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                     PORTS               NAMES
9a8cad6f21f5        microsoft/mssql-server-linux:2017-CU7   "/opt/mssql/bin/sqls…"   About an hour ago   Exited (0) 7 seconds ago                       sql

[clustadmin@docker3 ~]$ sudo systemctl start docker-container@sql
[clustadmin@docker3 ~]$ docker ps
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
9a8cad6f21f5        microsoft/mssql-server-linux:2017-CU7   "/opt/mssql/bin/sqls…"   About an hour ago   Up 5 seconds        0.0.0.0:1433->1433/tcp   sql

 

This method met my customer's requirement, but I found one drawback in a specific case: when I stop my container with the systemctl command and then restart it with the docker start command, the status is not reported correctly (Active = dead) and I have to run the systemctl restart command against my container to go back to normal. I will probably update this post or write another one after getting more information on this topic - or just feel free to comment: I'm willing to hear from you!

 

The second method I also proposed to my customer, for other SQL containers without any external dependencies, was to rely on the Docker container restart policy capability. This is a powerful feature and very simple to implement with either the docker run command or a Dockerfile, as follows:

docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@$$w0rd1' -p 1433:1433 --restart=unless-stopped -d microsoft/mssql-server-linux:2017-CU7

 

Restart policy values such as always and unless-stopped fit well with my customer's scenario, even if I prefer the latter option because it provides another level of control if you manually decide to stop the container for any reason.
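Note that an existing container doesn't have to be recreated to change this: the restart policy can also be applied in place with docker update (assuming a container named sql):

docker update --restart=unless-stopped sql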

I will voluntarily omit the third method, which consists of installing systemd directly into the container, because it is not recommended by Docker itself and is not suitable for my customer's case either.

See you!

 

 

 


Migrating from ASMLIB to ASMFD

Yann Neuhaus - Wed, 2018-06-20 12:33

Before Oracle 12.1 the methods used to configure ASM were:

  • udev
  • asmlib

Oracle 12.1 comes with a new method called Oracle ASM Filter Driver (Oracle ASMFD).
In the Oracle documentation we can find the following:
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.
The Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.

In the following blog post I am going to migrate from ASMLIB to ASMFD. I am using a 12.1 cluster with 2 nodes.

Below is our current configuration.

[root@rac12a ~]# crsctl check cluster -all
**************************************************************
rac12a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac12b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@rac12a ~]#


[root@rac12a ~]# crsctl get cluster mode status
Cluster is running in "flex" mode
[root@rac12a ~]#

[root@rac12a ~]# ps -ef | grep pmon
grid      7217     1  0 11:20 ?        00:00:00 asm_pmon_+ASM1
grid      8070     1  0 11:21 ?        00:00:00 apx_pmon_+APX1
oracle    8721     1  0 11:22 ?        00:00:00 ora_pmon_mydb_1
root     14395  2404  0 11:32 pts/0    00:00:00 grep --color=auto pmon
[root@rac12a ~]#

First let’s get information about our ASM disks. We will use these outputs later to migrate the disks to ASMFD disks

[root@rac12a ~]# oracleasm listdisks | xargs oracleasm querydisk -p             
Disk "ASM_DATA" is a valid ASM disk
/dev/sdc1: LABEL="ASM_DATA" TYPE="oracleasm"
Disk "ASM_DIVERS" is a valid ASM disk
/dev/sdd1: LABEL="ASM_DIVERS" TYPE="oracleasm"
Disk "ASM_OCR1" is a valid ASM disk
/dev/sdg1: LABEL="ASM_OCR1" TYPE="oracleasm"
Disk "ASM_OCR2" is a valid ASM disk
/dev/sdi1: LABEL="ASM_OCR2" TYPE="oracleasm"
Disk "ASM_VOT1" is a valid ASM disk
/dev/sde1: LABEL="ASM_VOT1" TYPE="oracleasm"
Disk "ASM_VOT2" is a valid ASM disk
/dev/sdh1: LABEL="ASM_VOT2" TYPE="oracleasm"
Disk "ASM_VOT3" is a valid ASM disk
/dev/sdf1: LABEL="ASM_VOT3" TYPE="oracleasm"
[root@rac12a ~]#

To migrate to ASMFD, we first have to change the value of the diskstring parameter for the ASM instance. The current value can be obtained with:

[grid@rac12a trace]$ asmcmd dsget
parameter:ORCL:*
profile:ORCL:*
[grid@rac12a trace]$

Let’s set the new value on both nodes

[grid@rac12a trace]$ asmcmd dsset 'ORCL:*','AFD:*'

We can then verify

[grid@rac12a trace]$ asmcmd dsget
parameter:ORCL:*, AFD:*
profile:ORCL:*,AFD:*
[grid@rac12a trace]$

Once the new value of the diskstring is set, let's stop the cluster on both nodes.

[root@rac12a ~]# crsctl stop cluster
[root@rac12b ~]# crsctl stop cluster

Once the cluster is stopped we have to disable and stop asmlib on both nodes

[root@rac12a ~]# systemctl disable oracleasm
Removed symlink /etc/systemd/system/multi-user.target.wants/oracleasm.service.

[root@rac12a ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@rac12a ~]# oracleasm exit
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm
[root@rac12a ~]#

[root@rac12a ~]# ls -ltr /dev/oracleasm/
total 0
[root@rac12a ~]#

Now let’s remove all packages relative to ASMLIB on both nodes

[root@rac12a oracle]# rpm -e oracleasm-support-2.1.11-2.el7.x86_64 oracleasmlib-2.0.12-1.el7.x86_64
warning: /etc/sysconfig/oracleasm saved as /etc/sysconfig/oracleasm.rpmsave
[root@rac12a oracle]#

The next step is to stop acfsload on both nodes

[root@rac12a ~]# lsmod | grep acfs
oracleacfs           3343483  0
oracleoks             500109  2 oracleacfs,oracleadvm
[root@rac12a ~]#

[root@rac12a ~]# acfsload stop
[root@rac12a ~]# lsmod | grep acfs
[root@rac12a ~]#

As root, we can now configure Oracle ASMFD to filter at the node level. In my case these steps were done on both nodes.

[root@rac12a oracle]# asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
[root@rac12a oracle]#

Once the configuration is done, we can check the AFD state on all nodes.

[root@rac12a oracle]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'rac12a.localdomain'
[root@rac12a oracle]#

We can see that the AFD module is loaded but filtering is disabled. We then have to edit oracleafd.conf to enable the filtering.

[root@rac12a etc]# cat oracleafd.conf
afd_diskstring='/dev/sd*1'

And then we have to run on both nodes

[root@rac12a etc]# asmcmd afd_filter -e
Connected to an idle instance.
[root@rac12a etc]#

[root@rac12b ~]#  asmcmd afd_filter -e
Connected to an idle instance.
[root@rac12b ~]#

Running the afd_state command again, we can confirm that the filtering is now enabled.

[root@rac12a etc]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac12a.localdomain'
[root@rac12a etc]#

Now we can migrate all the ASM disks.

[root@rac12a etc]# asmcmd afd_label ASM_DATA /dev/sdc1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_DIVERS /dev/sdd1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_OCR1 /dev/sdg1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_OCR2 /dev/sdi1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_VOT1 /dev/sde1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_VOT2 /dev/sdh1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_VOT3 /dev/sdf1 --migrate
Connected to an idle instance.
[root@rac12a etc]#

We can verify the ASMFD disks using the command

[root@rac12b ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASM_DATA                    ENABLED   /dev/sdc1
ASM_DIVERS                  ENABLED   /dev/sdd1
ASM_OCR1                    ENABLED   /dev/sdg1
ASM_OCR2                    ENABLED   /dev/sdi1
ASM_VOT1                    ENABLED   /dev/sde1
ASM_VOT2                    ENABLED   /dev/sdh1
ASM_VOT3                    ENABLED   /dev/sdf1
[root@rac12b ~]#

Let’s update the afd.conf so that ASMFD can mount ASMFD disks.

[root@rac12a etc]# cat afd.conf
afd_diskstring='/dev/sd*'
afd_filtering=enable

When the ASMFD disks are visible on both nodes, we can start acfsload on both nodes.

[root@rac12a etc]# acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
[root@rac12a etc]#

Now the conversion is done and we can start crs on both nodes

[root@rac12a ~]# crsctl start crs

[root@rac12b ~]# crsctl start crs

We can now remove all ASMLIB references from the diskstring parameter.

[grid@rac12a trace]$ asmcmd dsget
parameter:ORCL:*, AFD:*
profile:ORCL:*,AFD:*

[grid@rac12a trace]$ asmcmd dsset 'AFD:*'

[grid@rac12a trace]$ asmcmd dsget
parameter:AFD:*
profile:AFD:*
[grid@rac12a trace]$

Once the cluster is started, we can verify the disk names.

[grid@rac12a trace]$ asmcmd lsdsk
Path
AFD:ASM_DATA
AFD:ASM_DIVERS
AFD:ASM_OCR1
AFD:ASM_OCR2
AFD:ASM_VOT1
AFD:ASM_VOT2
AFD:ASM_VOT3
[grid@rac12a trace]$

We can also use the following query to confirm that ASMFD is now being used.

set linesize 300
col PATH for a20
set pages 20
col LIBRARY for a45
col NAME for a15
select inst_id,group_number grp_num,name,state,header_status header,mount_status mount,path, library
from gv$asm_disk order by inst_id,group_number,name;


   INST_ID    GRP_NUM NAME            STATE    HEADER       MOUNT   PATH                 LIBRARY
---------- ---------- --------------- -------- ------------ ------- -------------------- ---------------------------------------------
         1          1 ASM_DIVERS      NORMAL   MEMBER       CACHED  AFD:ASM_DIVERS       AFD Library - Generic , version 3 (KABI_V3)
         1          2 ASM_OCR1        NORMAL   MEMBER       CACHED  AFD:ASM_OCR1         AFD Library - Generic , version 3 (KABI_V3)
         1          2 ASM_OCR2        NORMAL   MEMBER       CACHED  AFD:ASM_OCR2         AFD Library - Generic , version 3 (KABI_V3)
         1          3 ASM_DATA        NORMAL   MEMBER       CACHED  AFD:ASM_DATA         AFD Library - Generic , version 3 (KABI_V3)
         1          4 ASM_VOT1        NORMAL   MEMBER       CACHED  AFD:ASM_VOT1         AFD Library - Generic , version 3 (KABI_V3)
         1          4 ASM_VOT2        NORMAL   MEMBER       CACHED  AFD:ASM_VOT2         AFD Library - Generic , version 3 (KABI_V3)
         1          4 ASM_VOT3        NORMAL   MEMBER       CACHED  AFD:ASM_VOT3         AFD Library - Generic , version 3 (KABI_V3)
         2          1 ASM_DIVERS      NORMAL   MEMBER       CACHED  AFD:ASM_DIVERS       AFD Library - Generic , version 3 (KABI_V3)
         2          2 ASM_OCR1        NORMAL   MEMBER       CACHED  AFD:ASM_OCR1         AFD Library - Generic , version 3 (KABI_V3)
         2          2 ASM_OCR2        NORMAL   MEMBER       CACHED  AFD:ASM_OCR2         AFD Library - Generic , version 3 (KABI_V3)
         2          3 ASM_DATA        NORMAL   MEMBER       CACHED  AFD:ASM_DATA         AFD Library - Generic , version 3 (KABI_V3)
         2          4 ASM_VOT1        NORMAL   MEMBER       CACHED  AFD:ASM_VOT1         AFD Library - Generic , version 3 (KABI_V3)
         2          4 ASM_VOT2        NORMAL   MEMBER       CACHED  AFD:ASM_VOT2         AFD Library - Generic , version 3 (KABI_V3)
         2          4 ASM_VOT3        NORMAL   MEMBER       CACHED  AFD:ASM_VOT3         AFD Library - Generic , version 3 (KABI_V3)

14 rows selected.
 


Remote syslog from Linux and Solaris

Yann Neuhaus - Wed, 2018-06-20 10:47

Auditing operations with Oracle Database is very easy. The default configuration, where SYSDBA operations go to ‘audit_file_dest’ (the ‘adump’ directory) and other operations go to the database may be sufficient to log what is done but is definitely not a correct security audit method as both destinations can have their audit trail deleted by the DBA. If you want to secure your environment by auditing the most privileged accounts, you need to send the audit trail to another server.

This is easy as well, and here is a short demo involving Linux and Solaris as the audited environments. I've created these 3 compute services in the Oracle Cloud, described below.

So, I have an Ubuntu service where I’ll run the Oracle Database (XE 11g) and the hostname is ‘ubuntu’

root@ubuntu:~# grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 16.04.4 LTS"

I have a Solaris service which will also run Oracle, and the hostname is 'd17872'

root@d17872:~# cat /etc/release
Oracle Solaris 11.3 X86
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
Assembled 03 August 2016

I have an Oracle Enterprise Linux service which will be my audit server, collecting syslog messages from remote hosts; the hostname is 'b5e501' and the IP address in the PaaS network is 10.29.235.150

[root@b5e501 ~]# grep PRETTY /etc/os-release
PRETTY_NAME="Oracle Linux Server 7.5"

Testing local syslog

I start to ensure that syslog works correctly on my audit server:

[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &
[root@b5e501 ~]#
[root@b5e501 ~]# logger -p local1.info "hello from $HOSTNAME"
[root@b5e501 ~]# Jun 20 08:28:35 b5e501 bitnami: hello from b5e501

Remote setting

On the audit server, I un-comment the lines about receiving syslog over TCP and UDP on port 514

[root@b5e501 ~]# grep -iE "TCP|UDP" /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# Remote Logging (we use TCP for reliable delivery)

I restart syslog service

[root@b5e501 ~]# systemctl restart rsyslog
Jun 20 08:36:47 b5e501 systemd: Stopping System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2769" x-info="http://www.rsyslog.com"] exiting on signal 15.
Jun 20 08:36:47 b5e501 systemd: Starting System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2786" x-info="http://www.rsyslog.com"] start
Jun 20 08:36:47 b5e501 systemd: Started System Logging Service.

I tail the /var/log/messages (which is my default destination for “*.info;mail.none;authpriv.none;cron.none”)

[root@b5e501 ~]# tail -f /var/log/messages &
[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &

I test with local1.info and check that the message is tailed even when logger sends it through the network:

[root@b5e501 ~]# logger -n localhost -P 514 -p local1.info "hello from $HOSTNAME"
Jun 20 09:18:07 localhost bitnami: hello from b5e501

That’s perfect.

Now I can test the same from my Ubuntu host to ensure that the firewall settings allow for TCP and UDP on port 514


root@ubuntu:/tmp/Disk1# logger --udp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in UDP"
root@ubuntu:/tmp/Disk1# logger --tcp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in TCP"

Here are the correct messages received:

Jun 20 09:24:46 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:24:54 ubuntu bitnami hello from ubuntu in TCP

Destination setting for the audit

As I don’t want to have all messages into /var/log/messages, I’m now setting, in the audit server, a dedicated file for “local1″ facility and “info” level that I’ll use for my Oracle Database audit destination

[root@b5e501 ~]# touch "/var/log/audit.log"
[root@b5e501 ~]# echo "local1.info /var/log/audit.log" >> /etc/rsyslog.conf
[root@b5e501 ~]# systemctl restart rsyslog

After testing the same two ‘logger’ commands from the remote host I check the entries in my new file:

[root@b5e501 ~]# cat /var/log/audit.log
Jun 20 09:55:09 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:55:16 ubuntu bitnami hello from ubuntu in TCP

Remote logging

Now that I have validated that remote syslog is working, I set up automatic forwarding of syslog messages on my Ubuntu box to send all 'local1.info' messages to the audit server:

root@ubuntu:/tmp/Disk1# echo "local1.info @10.29.235.150:514" >> /etc/rsyslog.conf
root@ubuntu:/tmp/Disk1# systemctl restart rsyslog

With a single '@' this forwards over UDP. You can double the '@' to forward using TCP, as shown below.
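For instance, the TCP variant of the forwarding line above would be:

local1.info @@10.29.235.150:514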

Here I check with logger locally (no mention of the syslog host here):

root@ubuntu:/tmp/Disk1# logger -p local1.info "hello from $HOSTNAME with forwarding"

and I verify that the message is logged in the audit server into /var/log/audit.log

[root@b5e501 ~]# tail -1 /var/log/audit.log
Jun 20 12:00:25 ubuntu bitnami: hello from ubuntu with forwarding

Repeated messages

Note that when testing, you may add "$(date)" to your message in order to see it immediately, because syslog holds back repeated messages to avoid flooding. This:

root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Then another one"

is logged as this:

Jun 20 12:43:12 ubuntu bitnami: message repeated 5 times: [ Always the same message]
Jun 20 12:43:29 ubuntu bitnami: Then another one

I hope that one day this idea will be implemented by Oracle when flooding messages to the alert.log ;)

Oracle Instance

The last step is to get my Oracle instance to send audit messages to the local syslog with facility.level local1.info, so that they will be automatically forwarded to my audit server. I have to set audit_syslog_level to 'local1.info' and audit_trail to 'OS':

oracle@ubuntu:~$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 20 11:48:00 2018
 
Copyright (c) 1982, 2011, Oracle. All rights reserved.
 
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> alter system set audit_syslog_level='local1.info' scope=spfile;
 
System altered.
 
SQL> alter system set audit_trail='OS' scope=spfile;
 
System altered.
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size 2233344 bytes
Variable Size 616565760 bytes
Database Buffers 444596224 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.

It is very easy to check that it works as the SYSDBA and the STARTUP are automatically audited. Here is what I can see in my audit server /var/log/audit.log:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 11:55:47 ubuntu Oracle Audit[27066]: LENGTH : '155' ACTION :[7] 'STARTUP' DATABASE USER:[1] '/' PRIVILEGE :[4] 'NONE' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[13] 'Not Available' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:47 ubuntu Oracle Audit[27239]: LENGTH : '148' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:51 ubuntu Oracle Audit[27419]: LENGTH : '159' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[10] '2860420539'

In the database server, I have no more files in the adump since this startup:

oracle@ubuntu:~/admin/XE/adump$ /bin/ls -alrt
total 84
drwxr-x--- 6 oracle dba 4096 Jun 20 11:42 ..
-rw-r----- 1 oracle dba 699 Jun 20 11:44 xe_ora_26487_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26515_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26519_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26523_1.aud
drwxr-x--- 2 oracle dba 4096 Jun 20 11:48 .
-rw-r----- 1 oracle dba 896 Jun 20 11:48 xe_ora_26574_1.aud

Solaris

I have also started a Solaris service:

opc@d17872:~$ pfexec su -
Password: solaris_opc
su: Password for user 'root' has expired
New Password: Cl0udP01nts
Re-enter new Password: Cl0udP01nts
su: password successfully changed for root
Oracle Corporation SunOS 5.11 11.3 June 2017
You have new mail.
root@d17872:~#

Here, I add the forwarding rule to /etc/syslog.conf (the separator must be a tab; spaces will not work) and restart the syslog service:

root@d17872:~# echo "local1.info\t@10.29.235.150" >> /etc/syslog.conf
root@d17872:~# svcadm restart system-log

Then I log a message locally:

root@d17872:~# logger -p local1.info "hello from $HOSTNAME with forwarding"

Here is the message received on the audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 05:27:51 d17872.compute-a511644.oraclecloud.internal opc: [ID 702911 local1.info] hello from d17872 with forwarding

Here on Solaris I have the old ‘syslog’, which offers no syntax to change the UDP port. The default port is defined in /etc/services, and it is the one my audit server is configured to listen on:

root@d17872:~# grep 514 /etc/services
shell 514/tcp cmd # no passwords used
syslog 514/udp

If you want more features, you can install syslog-ng or rsyslog on Solaris.
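
With rsyslog, for example, the legacy selector syntax accepts a destination port directly (a sketch for /etc/rsyslog.conf, assuming the same facility and audit server as above; a double @@ would switch the transport to TCP):

local1.info  @10.29.235.150:514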

 

The post Remote syslog from Linux and Solaris appeared first on Blog dbi services.

Bourbon Ibirapuera Streamlines Property Operations, Creates New Guest Experiences with Oracle Cloud

Oracle Press Releases - Wed, 2018-06-20 08:00
Press Release
Bourbon Ibirapuera Streamlines Property Operations, Creates New Guest Experiences with Oracle Cloud Brazilian Hotel Implements Integrated Suite of Hospitality, ERP and CX

Redwood Shores, Calif.—Jun 20, 2018

Oracle today announced that Bourbon Ibirapuera has selected a suite of Oracle solutions including OPERA and Simphony Cloud, Fusion ERP, OPERA Loyalty, OPERA OWS, Oracle Sales Cloud, Eloqua and Hyperion as part of an initiative to streamline operations across properties and arm hotel associates with new insights that inform personalized guest experiences. Bourbon Ibirapuera initially invested in OPERA Cloud, Simphony Cloud and ERP Cloud solutions before adding Oracle Sales Cloud and Eloqua products for a full suite of operations, back-office and customer-facing technologies. Bourbon Ibirapuera will be the first hotel in Brazil to install these solutions and arm staff with cloud tools that enable deeper guest interaction and loyalty.

“Bourbon Ibirapuera’s choice of Oracle is a testament to the value that Oracle horizontal and vertical products bring to all segments of the hospitality market,” said Bernard Jammet, senior vice president, Oracle Hospitality. “With these new tools Bourbon Ibirapuera will be able to augment their guest experience and compete with larger chains and properties in region.”

“The hospitality industry as a whole spends too much time and effort managing multiple vendors and building integrations across solutions to maximize the value from IT investments,” said John Chen, CEO, Bourbon Ibirapuera. “After an initial investment in OPERA and Simphony Cloud we quickly realized the value of investing in a suite of solutions with existing integrations. With our new suite of solutions we are arming our business with deeper insights to empower informed management decisions, streamlining the reservation process and optimizing hotel operations in an integrated way.”

As a longtime customer of Oracle Hospitality, Bourbon Ibirapuera’s experience with OPERA delivering value for hospitality operations established Oracle as a clear front runner for the upgrade project. A phased integration approach, first focusing on the back end operational infrastructure before adding new marketing tools, allowed Bourbon Ibirapuera to effectively manage digital transformation and prepare staff for a cloud transition. Bourbon Ibirapuera’s implementation will also bring several new customer-facing features to the region including web and mobile check-in and more targeted incentives for new or repeat guests including personalized rates and promotions.

Bourbon Ibirapuera will bring these point-of-sale, online reservation, financial and budget planning, and CRM and marketing tools online in June 2018 after a four-month implementation process.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators and hoteliers. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility.

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


St. HOPE Accelerates Mission to Support Sacramento Youth

Oracle Press Releases - Wed, 2018-06-20 08:00
Press Release
St. HOPE Accelerates Mission to Support Sacramento Youth Nonprofit Founded by Former NBA All-Star and Sacramento Mayor Turns to NetSuite to Support Growth Beyond the Classroom

SAN MATEO, Calif.—Jun 20, 2018

 

St. HOPE, a nonprofit community development corporation, has leveraged Oracle NetSuite to support its mission to create one of the finest urban pre-kindergarten through 12th grade public school systems in the United States. With NetSuite SuiteSuccess for Nonprofits, St. HOPE has been able to streamline critical business functions in order to focus its time and resources on providing high quality public education and creating living-wage sustainable jobs.

Founded by former NBA All-Star and Sacramento mayor Kevin Johnson in 1989, St. HOPE began as a single, portable classroom that served as an afterschool program for Sacramento High School students. Today, it serves 1,800 students through five charter schools and manages residential properties as well as an Art and Cultural Center that includes a cafe, bookstore, barbershop, art gallery and 200-seat theater. The charter school network focuses on students from urban communities and aims to graduate self-motivated, industrious and critical-thinking leaders who are committed to serving others, passionate about lifelong learning and prepared to earn a degree from a four-year college.

“As we found success with the schools, the organization realized that we had an opportunity to do more in the community,” said Julian Love, chief financial officer, St. HOPE Community Development. “That meant we needed a system that could better track and manage finances. SuiteSuccess fit exactly what we needed.”

With the preconfigured roles, dashboards and nonprofit industry best practices within SuiteSuccess, St. HOPE has been able to shorten payroll processes by 87 percent, digitize and gain greater control over purchasing processes, and achieve real-time visibility into its financial performance. As a result, St. HOPE has been able to manage the increasing business complexity presented by its growth and expanding scope. St. HOPE selected NetSuite SuiteSuccess for Nonprofits in March 2017 and went live with a full-fledged Enterprise Resource Planning (ERP) system in less than three months.

“NetSuite has a proud history of helping organizations in the nonprofit sector,” said David Geilhufe, Senior Director, Social Impact & Nonprofit, Oracle NetSuite. “With SuiteSuccess, we’re able to help thriving organizations like St. HOPE to quickly and easily manage critical business functions so they can focus on their mission and on helping the community.”

Contact Info
Danielle Tarp
Oracle NetSuite Corporate Communications
650-506-2904
danielle.tarp@oracle.com
About St. HOPE

St. HOPE began in 1989 in a portable classroom at Sacramento High School as an after-school program named St. HOPE Academy. Founded by NBA All-Star and Oak Park native Kevin Johnson, St. HOPE is a nonprofit community development corporation whose mission is to revitalize the Oak Park community through public education and economic development. Learn more at www.sthope.org.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials/Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 40,000 organizations and subsidiaries in 199 countries and territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle Utilities Ranks No. 1 in Home Energy Management

Oracle Press Releases - Wed, 2018-06-20 07:00
Press Release
Oracle Utilities Ranks No. 1 in Home Energy Management Navigant names Oracle the leader in customer engagement and demand side management technology

Redwood Shores, Calif.—Jun 20, 2018

Oracle, the largest provider of cloud technology for the utility industry, once again earned the top spot in a Navigant Research Leaderboard report that ranks companies in the home energy management (HEM) space. Oracle Utilities, which acquired Opower in 2016, received the highest ranking in this 2018 report due to significant market penetration of its home energy management solution across 100 utilities and its ability to offer utility companies a comprehensive, end-to-end utility software solution at scale around the world.

“This is significant validation of our continued leadership and support of our utility customers,” said Rodger Smith, SVP and general manager of Oracle Utilities. “Since acquiring Opower we have continued to innovate in the rapidly evolving home energy management market to deliver the strongest results in the category. Our investment in scalable solutions that connect every customer enables tighter customer-to-grid integration for the utility of the future.”

Even prior to joining Oracle, Opower had been consistently ranked as the top provider since Navigant introduced the HEM Leaderboard, thanks to its leading capabilities in this category, including home energy reports, behavioral demand response, smart meter and rates engagement, billing insights and alerts, and embeddable online tools.

“Home energy management (HEM) is a broad market of technologies and services that consumers use to better manage and control their home energy consumption and production. With the development of the smart home and connected devices, energy management has become a critical part of the digitization of the home. Oracle Utilities’ Opower solutions are at the forefront of monitoring energy usage, demand side management programs and increasing customer engagement to increase energy efficiency,” said Paige Leuschner, Research Analyst at Navigant.

The Navigant Research Leaderboard Report examines the strategy and execution of 14 companies that offer HEM software solutions and rates them on 10 criteria: vision, go-to-market strategy, partners, technology, geographic reach, sales and marketing, product performance, product portfolio and integrations, pricing, and staying power. Using Navigant Research’s proprietary Leaderboard methodology, vendors are profiled, rated, and ranked to provide an objective assessment of each company’s relative strengths and weaknesses in the global HEM market.

Contact Info
Valerie Beaudett
Oracle Corporation
+1 650.400.7833
valerie.beaudett@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle Utilities

Oracle Utilities delivers business critical applications that help electric, gas and water utilities worldwide enhance customer experience, increase operational efficiency and achieve performance excellence. Our customer care and billing, network management, work and asset, field services, meter data management and analytics solutions integrate with Oracle’s leading enterprise applications, BI tools, middleware, database technologies, servers and storage. We are the largest provider of cloud services in the industry today, serving the entire utility value chain from the grid to the meter to end customers. Our software enables customers to adapt more nimbly to market deregulation, meet ever-evolving customer demands, and deliver on energy efficiency commitments. Find out how we can become your trusted advisor—visit www.oracle.com/utilities.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


how to reset a sequence

Tom Kyte - Wed, 2018-06-20 01:26
A sequence is created with no options, and its current value is 10. Specify the statements needed to reset the sequence to 8, so that the next value generated after 8 is 11. Explain the logic.
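
One classic trick (a sketch, assuming a sequence named seq and that no other session pulls values from it in the meantime) is to play with the increment rather than dropping the sequence:

ALTER SEQUENCE seq INCREMENT BY -2;  -- current value is 10, so the next call steps back to 8
SELECT seq.NEXTVAL FROM dual;        -- returns 8
ALTER SEQUENCE seq INCREMENT BY 3;   -- 8 + 3 = 11
SELECT seq.NEXTVAL FROM dual;        -- returns 11
ALTER SEQUENCE seq INCREMENT BY 1;   -- restore the default increment

More recent releases also offer ALTER SEQUENCE seq RESTART START WITH 8, which resets the sequence in a single statement.
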
Categories: DBA Blogs

Log of switchover/failover/open

Tom Kyte - Wed, 2018-06-20 01:26
What data dictionary view can be used to determine the number of times that a switchover/failover/open has occurred for a standby database?
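
A sketch of one angle (an assumption on my part, not a definitive answer): every failover ends with an OPEN RESETLOGS, and each resetlogs creates a new database incarnation, so V$DATABASE_INCARNATION gives a count and history of those events; switchovers do not create incarnations and are usually traced through the alert log or the messages in V$DATAGUARD_STATUS.

SELECT incarnation#, resetlogs_time, status
FROM v$database_incarnation
ORDER BY incarnation#;
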
Categories: DBA Blogs
