My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expressiveness of C. Our product PFCLScan has its main functionality written in C. The....[Read More]
Posted by Pete On 23/07/14 At 08:44 PM
We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]
Posted by Pete On 25/06/14 At 09:41 AM
Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation, and now in version 2.0 we....[Read More]
Posted by Pete On 17/04/14 At 03:56 PM
I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter, so it is a little like the Oracle security round table sessions....[Read More]
Posted by Pete On 05/03/14 At 10:17 AM
We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]
Posted by Pete On 29/10/13 At 01:05 PM
We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site that describes some of the highlights of the over 220 new....[Read More]
Posted by Pete On 18/10/13 At 02:36 PM
We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2, adding some new features, new content and more. We are working to release another service update in the next couple....[Read More]
Posted by Pete On 04/09/13 At 02:45 PM
It has been a few weeks since my last blog post, but don't worry, I am still keen to blog about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]
Posted by Pete On 28/08/13 At 05:04 PM
This is based on the presentation Juan Loaiza gave regarding what's new with Exadata. While a large part of the presentation focused on what was already available, there are quite a few interesting new features coming down the road.
First off was a brief mention of the hardware. I'm less excited about this. The X4 has plenty of the hardware you could want: CPU, memory and flash. You'd expect some or all of them to be bumped in the next generation.
This was skated over fairly quickly, but I expect an Exadata X5 in a few months. The X4 was released back in December 2013; the first X4 I saw was in January 2014. I wouldn't be surprised if Oracle released the X5 on or around the anniversary of that release.
Very little was said about the new hardware that would be in the X5, except that the development cycle has followed what Intel has released, and that CPU core counts have gone up and flash capacity has gone up. No word was given on which CPU is going to be used in the X5.
The compute nodes on an X4-2 have Intel E5-2697 v2 chips; this is a 12-core chip running at 2.7GHz. I'd expect an increase in core count. The X3 to X4 transition increased core count by 50%. If that happens again, we get to 18 cores. There is an Intel E5-2699 v3 with 18 cores, but that's clocked at 2.3GHz.
However, I think I'd be less surprised if they went with the E5-2697 v3, which is a 14-core chip clocked at 2.6GHz. That would be a far more modest increase in the number of cores. The memory speed available with this chip does go up though; it's DDR4, which might help with the In-Memory option. I also wonder if they'll bump the amount of memory supported; this chip (like its predecessor) can go to 768GB.
As I said, it was not mentioned which chip was going to be used, only that Intel had released new chips and that Oracle would be qualifying their use for Exadata over the coming months.
New Exadata Software
There was a bunch of interesting sounding new features coming down the road. Some of the ones that in particular caught my eye were:
The marketing-friendly term "Exafusion". Exafusion seems to be about speeding up OLTP; labelled as "Hardware Optimized OLTP Messaging", it's a reimplementation of cache fusion in which messages bypass the network stack, leading to a performance improvement.
Columnar Flash Cache – this is Exadata automatically reformatting HCC data, when written to flash, into a pure column store for analytic workloads. Both formats are stored.
Database snapshots on Exadata. This seems designed with pluggable databases in mind, for producing fast clones for dev/test environments. Clearly this was a gap with ASM as used on Exadata, though ACFS does snapshots.
Currently the latest Linux release available on Exadata is Oracle Linux 5.10. Upgrading across major releases is not supported and would have required reimaging; not a pretty prospect. Thankfully Oracle is going to allow and enable upgrading in place to 6.5.
There was some talk about reducing I/O outliers, both in reading from HDD and in writing to flash.
Currently with IORM you can only enable or disable access to flash for a particular database. Full IORM seems to be coming for flash.
The final new feature that caught my eye was the long-rumoured virtualisation coming to Exadata: OVM is coming. The ODA, for example, has had VM capability for some time, so in some ways it's an obvious extension. With the increasing number of cores, I expect lots of smaller organisations may not actually need all those cores and might think that, even if they could turn unused ones off, it's a waste buying hardware they can't use.
I’m hoping to NOT see OVM on an Exadata in the wild anytime soon.
Software on Silicon
One final point, almost tucked out of sight, was that Juan had a little bullet point about "software on silicon". Now this has me confused. My understanding is that when Larry was talking about this, it was specifically about SPARC. That I can understand, as Oracle controls what goes on the chip.
Ignoring the SPARC SuperCluster, there is no SPARC on Exadata. So that leaves either a closer collaboration with Intel or moving to SPARC. Collaborating more closely with Intel is a possibility, and Oracle had first dibs on the E7-8895 v2 for the X4-8.
I can't imagine them changing the compute nodes to SPARC; that wouldn't make sense. But "software on silicon" is a bit like offloading…
Exadata software definitely keeps moving forward, and the difference between running Oracle on Exadata and on non-Exadata hardware grows ever wider with each "Exadata-only" feature.
- Any 1 or 2 eBooks/Videos -- $10 each
- Any 3-5 eBooks/Videos -- $8 each
- Any 6 or more eBooks/Videos -- $6 each
OTN's Tech Fest was AWESOME! Thanks for joining us! We had fun, and we hope you did, too!
The OTN team have been busy shooting video and attending sessions. See what they've been up to so far -
Follow @JavaOneConf for conference-specific announcements
Hacking sessions and interviews on NightHackingTV – LIVE from the Java Hub, 9am-4pm PT.
Special Activity in the OTN lounge, Moscone South Upper Lobby on Tuesday, September 30th - OTN Wearable Meetup – 4 to 6pm - See live demos of Oracle ideation and proof of concept wearable technology. Show us your own wearables and discuss the finer points of use cases, APIs, integrations, UX design, and fashion and style considerations for wearable tech development, and lots more!
Landing on Sunday 28th after a 13-hour trip, my colleague Franck Pachot and I had just enough time to register, go to the hotel, and go back to the "Welcome Reception", where we could eat something. After a night where I could feel the jet lag :-) we were ready to "participate" in this amazing event, Oracle Open World 2014.
The first session I attended was the keynote, where the new challenges were laid out: "moving" 20-year-old applications; building new infrastructures with smaller budgets, as the money goes increasingly to the business applications that fulfill user demands and expectations; and Big Data, where not only the analysis but also the delivery of the results has to be fast. To summarize, we are in a period where the infrastructure is changing through increasing use of the cloud, but the approach to dealing with the new challenges also has to change to integrate this new digital world.
Another interesting session was the one from Mr. William Lyons about the Oracle WebLogic Server strategy and roadmap. He talked about the Cloud Application Foundation: mobile development productivity, the foundation for Fusion Middleware and Applications, high availability, performance, multi-tenancy, cloud management and operations, and so on. He first recapitulated the new features in WebLogic 12.1.2, such as managing Coherence, Oracle HTTP Server and the web tier from a single tool (the WLS console, WLST or the OFMW console). He also talked about the database integration with GridLink, RAC, the multitenant database, Application Continuity and Database Resident Connection Pooling, which improves performance.
He then moved on to the new features in 12.1.3, which was released in June 2014. This new version improves functionality in the Fusion Middleware, Mobile and High Availability areas. Developers can now have a free development license, and they can install the product from a zip distribution which also contains the patches. WebLogic 12.1.3 supports Java SE 7 and 8.
The next release, planned for 2015, is WebLogic 12.2.1. With this version the multitenancy concept is covered: domain partitions can be used to isolate resources for the different tenants. Regarding Java, it will be fully compliant with Java EE 7 and will support Java SE 8.
On this first day a lot of information was ingested, but it will have to be digested over the next weeks :-)
Let's see what will happen in the next few days!
Oracle Open World is not only conferences but also practice and networking. Today at the OTN lounge I installed the following demos on my laptop:
- a Dbvisit replicate #repattack environment
- a Delphix cloning environment #cloneattack
I'll detail the former below and the latter tomorrow, but if you are in San Francisco and missed it, please come tomorrow to the same kind of session at Oak Table World! You don't even need the OOW registration for that; it's independent but at the same place. Here are the details: http://www.oraclerealworld.com/oaktable-world/agenda/
This is the event, announced in a tweet on September 29, 2014.
Well, actually I installed everything a bit earlier, as I had the #repattack environment before, and I woke up very early because of the jet lag... The installation is straightforward and I monitored it with another tool which I like (and whose maker we partner with as well): Orachrome Lighty.
The idea is to quickly set up a source and a target VirtualBox VM, with Oracle XE and Swingbench on the source, and then set up the replication between them. It is really straightforward and shows that logical replication is not too complex to set up. The OTN lounge was also the occasion to meet the Dbvisit team.
Here is the setup, shared in a tweet on September 30, 2014; I will continue tomorrow with cloning.
Still, within the storm, I managed to attend some very interesting PeopleSoft sessions. I may discuss some other findings in future posts, but today I want to focus on the new Delivery Model.
This model, based on the PeopleSoft Update Manager, has been around since the release of the PeopleSoft 9.2 applications. Initially, I had mixed thoughts on the approach. Naturally, it's great to be able to download a periodically updated virtual machine and review the latest and greatest functionality. On the other hand, the need to download the image files (which amount to more than 30 GB) just to apply a single regulatory patch increases the time needed to apply patches for which you already had all the prerequisites.
However, Oracle has started to deliver important new functionality in the latest update images (or plans to deliver some of it soon, such as the first FluID applications in HCM Update Image 9), so the benefit of the new delivery model is becoming much more visible.
Oracle has announced that, as of today, there are no plans to deliver PeopleSoft 9.3. Is that bad news? Not necessarily. Actually, if new functionality keeps flowing through newly delivered update images, it becomes a significant improvement over having to perform an application upgrade every 4 or 5 years.
The new delivery model (a.k.a. the Continuous Delivery Model) allows customers to pick new functionality individually, without having to apply changes for other function points. This greatly simplifies the update process, reducing maintenance costs compared with the old application upgrade approach.
Oracle has recently released a white paper on the Continuous Delivery Model which I found very illuminating. On top of that, today it was announced that the Cumulative Feature Tool available at peoplesoftinfo.com will include the image numbers as releases, in order to make it easy to identify new functionality.
Time will tell what the final innovation pace will be. Today, with FluID and self-service applications being delivered or planned frequently, the Continuous Delivery Model looks like a nice step forward.
As many will have no doubt heard, there’s a new vulnerability that has been spotted, and there are already exploits for it in the wild.
The vulnerable systems are those running Bash, so Windows machines are safe; it's just Unix/Linux and Mac OS X.
Security Researcher Kasper Lindegaard from Secunia rates this as a bigger issue than the Heartbleed exploit discovered in April this year. “Heartbleed only enabled hackers to extract information, Bash enables hackers to execute commands to take over your servers and systems.”
The US government has rated this 10 out of 10 from severity point of view.
Oracle have been quick to react to this threat, and have issued a security alert here. It includes this chilling text:
This vulnerability may be remotely exploitable without authentication, i.e. it may be exploited over a network without the need for a username and password. A remote user can exploit this vulnerability to execute arbitrary code on systems that are running affected versions of Bash.
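If you want to see for yourself whether a given bash binary is affected, the widely circulated check for the original Shellshock bug (CVE-2014-6271) is a one-liner; the sketch below wraps it in a tiny script, with the "vulnerable"/"patched" messages being purely illustrative markers rather than part of any official tooling:

```shell
#!/bin/sh
# Shellshock (CVE-2014-6271) check: a vulnerable bash keeps parsing past
# the end of the exported function definition and executes the trailing
# command, so "vulnerable" is printed before "safe". A patched bash
# prints only "safe".
out=$(env x='() { :;}; echo vulnerable' bash -c "echo safe" 2>/dev/null)
case "$out" in
  *vulnerable*) echo "this bash is vulnerable to CVE-2014-6271" ;;
  *)            echo "this bash appears patched" ;;
esac
```

Note that this only exercises the original bug; the incomplete first fix led to the follow-up CVE-2014-7169, which Oracle's alert covers.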
The multitenant architecture: needs Enterprise Edition and the multitenant option; consists of a CDB (container database) and zero, one or up to 252 PDBs (pluggable databases); has a root container (the CDB itself) and a seed container (a template for creating PDBs). There is only one instance per CDB. A PDB doesn't have: background processes [...]
The post OCP 12C – Basics of Multitenant Container Database (CDB) appeared first on Oracle DBA Scripts and Articles (Montreal).
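To make the CDB/PDB terms above concrete, here is a minimal SQL sketch of cloning the seed into a new PDB on a 12c CDB; the PDB name, admin user, password and file paths are placeholders invented for illustration, not anything from the post:

```sql
-- Run in the root container (CDB$ROOT) as a common user with the
-- CREATE PLUGGABLE DATABASE privilege. The seed (PDB$SEED) is cloned
-- into a new PDB; FILE_NAME_CONVERT maps seed datafile paths to the
-- new PDB's paths (placeholder paths shown).
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb1_admin IDENTIFIED BY "change_me"
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',
                       '/u01/oradata/CDB1/pdb1/');

-- A newly created PDB starts in MOUNTED mode; open it for use.
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- List the containers known to this CDB instance.
SELECT con_id, name, open_mode FROM v$pdbs;
```

Since there is only one instance per CDB, the new PDB shares the background processes and SGA of that instance, which is exactly why a PDB doesn't have its own.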
Oracle Security Alert for CVE-2014-7169
Security Alert CVE-2014-7169 addresses a publicly disclosed vulnerability affecting GNU Bash. GNU Bash is a popular open source command line shell incorporated into Linux and other widely used operating systems. This vulnerability affects multiple Oracle products. This vulnerability may be remotely exploitable without authentication, i.e. it may be exploited over a network without the need for a username and password. A remote user can exploit this vulnerability to execute arbitrary code on systems that are running affected versions of Bash.
Oracle is still investigating this issue and will provide fixes for affected products as soon as they have been fully tested and determined to provide effective mitigation against the vulnerability.
The fixes that are available for immediate application by customers are listed in the Patch Availability Table. This Security Alert will be updated when fixes are available for additional affected Oracle products without sending additional emails to customers. Customers should check this page for updates.
Due to the severity, public disclosure, and reports of active exploitation of CVE-2014-7169, Oracle strongly recommends that customers apply the fixes provided by this Security Alert as soon as they are released by Oracle.
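For CVE-2014-7169 itself (the follow-up bug found in the incomplete first fix), the commonly circulated check is slightly different; the sketch below runs it in a scratch directory, because on a vulnerable bash it creates a file as a side effect (the messages printed are illustrative, not from any official Oracle check):

```shell
#!/bin/sh
# CVE-2014-7169 check: on a vulnerable bash the crafted environment
# variable confuses the parser, so the trailing ">\" is treated as a
# redirection and the output of `date` lands in a file literally named
# "echo". A patched bash creates no such file.
cd "$(mktemp -d)" || exit 1
env X='() { (a)=>\' bash -c "echo date" >/dev/null 2>&1
if [ -e echo ]; then
  echo "this bash is vulnerable to CVE-2014-7169"
else
  echo "this bash appears patched"
fi
```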
If the Oracle Cloud is so wonderful, why haven't all of Oracle's customers moved to it already?
Great, great question. It goes straight to the heart of one of Oracle's primary messages. The answer played out as something close to what follows:
1. The cloud services model is still relatively immature within the Oracle ecosystem. Some elements of Oracle's pricing and execution in the services model are still being worked out, and that will take some time, mostly because human beings typically don't change behavior at the drop of a hat…regardless of where they work. It's still a work in progress, so many customers are taking a "wait and see" approach while things work themselves out.
2. Services revenue, while growing, only constitutes about five percent of Oracle’s revenue at the moment. Cloud services are still a relatively new thing in the Oracle world. Not every customer is ready to be on the leading edge, especially in light of their own corporate culture.
3. It's tough to move customizations to the cloud. There's no secret sauce to make it easy. Some heavily-customized customers have many customizations to reconsider before they'll be ready to take advantage of cloud services. The same could be said for data: many customers have significant data clean-up efforts required to be cloud-ready. Again, there's no secret sauce for this.
4. Lack of control, sometimes expressed as a concern over data security. In a public cloud in particular, a customer's servers are no longer under their control. Ditto for data storage. While that makes some customers nervous, I'd suggest those concerns be balanced by two thoughts: A) Oracle is probably better at protecting your data than you are; protecting data is part of their core business, and most Oracle customers do not generate revenue or profits by protecting data. B) Citing Oracle's Thomas Kurian: "most customers would rather use enterprise applications than run enterprise applications." Moving to the new model requires customers to let go of running the applications; for most customers, the economics alone make that a good thing.
It's a funny thing. Cloud services offer some pretty significant benefits: relief from the maintenance associated with running enterprise applications, the capability to be more agile in development, and the flexibility to quickly scale up and down as computing requirements change. There are lots of benefits available in cloud application services. What's holding customers back from getting those benefits comes down to two overarching themes: 1) challenges in their own mindset or corporate culture; 2) the state of their data or architecture. That seems to be it, unless I'm missing something. And if I am, you can tell me in the comments.