
Feed aggregator

Top 10 Book Meme

Greg Pavlik - 1 hour 7 min ago
What books have most impacted me? I picked books I have returned to over and over. 
Yes, I know this is solipsistic to publish, but it's a fascinating thing to think through. I'm sure the list will not look right in a few months anyway. But here I go...

1 The Brothers Karamazov, Fyodor Dostoyevsky

Reason: the best book ever written. Duh.

2 Iob, LXX

Reason: bad things happen to good people, quite often.

3 I Am a Cat, Natsume Soseki

Reason: comedy is good for the soul. This is the funniest book I've ever read.

4 The Symposium, Plato

Reason: love. And I'm an only partially reconstructed platonist.

5 Demons, Fyodor Dostoyevsky

Reason: explains a big part of the 20th century. Makes 1984 look like crude propaganda.

6 Also sprach Zarathustra, Friedrich Nietzsche

Reason: Nietzsche saw the enormity of the modern project clearly.

7 Life of Moses, Gregory of Nyssa

Reason: before structuralism, post-structuralism, semiotics, and deconstruction, there was Gregory of Nyssa. And apokatastasis.

8 For the Time Being, W.H. Auden

Reason: aside from the fact that Auden is the best English-language poet, this is a deeply moving meditation on Christmas in the anglophone experience. Read it several times each winter.

9 The Divine Comedy, Dante Alighieri

Reason: great art meets allegory meets beauty. Honestly, if I were stuck with only one book, this might be it.

10 Faust, Johann Wolfgang von Goethe

Reason: we are all compromised to one degree or another.

Introducing Oracle BPM Suite 12c: Intelligent Adaptive Processes

WebCenter Team - 5 hours 40 min ago


In June 2014, Oracle launched Oracle Business Process Management Suite 12c (Oracle BPM Suite 12c), the industry's most social, unified, and complete BPM solution that helps organizations plan and manage their entire business architecture.

The latest release comes as demand for fast, interactive information increases, which in turn is driving ever-greater demand for more agile, responsive IT organizations.

In response, Oracle BPM Suite 12c enables businesses to quickly adapt to disruptive technologies such as cloud computing, big data, social networking, and smart devices, which have revolutionized how consumers and employees access and consume information.

Responsiveness Is the Lynchpin
"The world is a different place than it was just a few short years ago," says Oracle Director of Product Marketing Dain Hansen. "Operating in a digital economy means that companies must rethink how to do business."

Responsiveness has become the lynchpin for competitive advantage. That includes meeting customer concerns rapidly, providing real-time visibility into the enterprise to remove internal roadblocks and increase agility—and the ongoing ability to adapt to ever-evolving technologies and business practices.

Adaptive Management for Unstructured Content and Processes
Ideally, employees should have a real-time view into accounts payable and other operations that involve unstructured content and/or processes. Only then can they track how long processes take, spot bottlenecks, fix problems—and manage all the associated liabilities.

Thanks to tighter integration between Oracle BPM Suite 12c and Oracle WebCenter, a new adaptive case management feature delivers better, more-flexible case management for these traditionally thorny business processes.

New in Oracle BPM Suite 12c
In addition to adaptive case management, Oracle BPM Suite 12c provides the following new features to speed time-to-value, whether implementing new business processes or enhancing existing ones—across both in-house and cloud applications and services.
  • Improved business activity monitoring and reporting. Tightly integrated with Oracle Business Intelligence tools and Oracle Coherence, this feature delivers rich analytics tools, out-of-the-box process dashboards, out-of-the-box analysis tools, time-based analysis, and trend analysis for key performance indicators (KPIs). These enhancements, combined with the embedded complex query processor, enable organizations to react more quickly than ever to potential or actual performance issues as well as emerging trends and risks.
  • Business architecture modeling. Users can capture and catalog business processes from beginning to end, including strategic elements such as goals, objectives, and strategies—then link them to the value chains and business processes that implement them. And KPIs and other measures can be rolled back up to the goals and objectives within the business architecture for faster reporting and adjusting.
  • Mobile and software-as-a-service (SaaS) integrations. With Oracle BPM Suite 12c, users can easily and securely connect business processes to SaaS and mobile applications. And with Oracle Cloud Adapters, users can integrate with a range of Oracle and third-party cloud applications. In addition, Oracle BPM Suite 12c now supports REST/JSON for integrating mobile-aware process applications.

Read the Oracle Business Process Management Suite 12c press release.

Watch a special online event introducing Oracle BPM Suite 12c and featuring General Electric, Vertafore, San Joaquin County, and AVIO Consulting.


Introducing Oracle BPM Suite 12c

Date: Wednesday, September 17, 2014

Time: 10:00 AM PDT


ssl version

Laurent Schneider - 6 hours 55 min ago

I wrote about ssl version in jdbc thin yesterday

The default version also no longer works for the thick client, with a 12c client and an 11g server.

With 11gR2 :


C:> tnsping (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=SRV01)(PORT=1521)))
TNS Ping Utility for 64-bit Windows: Version 11.2.0.4.0
OK (100 msec)

with 12cR1 :


C:> tnsping (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=SRV01)(PORT=1521)))
TNS Ping Utility for 64-bit Windows: Version 12.1.0.1.0
TNS-12560: TNS:protocol adapter error

In the trace file I see:


ntzgsvp: no SSL version specified - using default version 0
ntzdosecneg: SSL handshake failed with error 29048.
ntzCreateConnection: returning NZ error 29048 in result structure
ntzCreateConnection: failed with error 542
nserror: nsres: id=0, op=65, ns=12560, ns2=0; nt[0]=29048, nt[1]=542, nt[2]=0; ora[0]=29048, ora[1]=0, ora[2]=0

I could not find this documented as a change yet, but if you force ssl_version to 3.0, both client versions work.


C:> tnsping (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=SRV01)(PORT=1521)))
TNS Ping Utility for 64-bit Windows: Version 12.1.0.1.0
OK (80 msec)
C:> find "version" tnsping.trc sqlnet.ora

---------- TNSPING.TRC
nlstddp_dump_ptable:   ssl_version = 3.0
ntzGetStringParameter: found value for "ssl_version" configuration parameter: "3.0"

---------- SQLNET.ORA
ssl_version=3.0

SQLDeveloper and SSH Tunnels

John Scott - 8 hours 16 min ago

I saw @krisrice post this

SQL Developer 4.0.3 is out. Try the new SSH Tunneling connections. http://t.co/cFc7L5bjji

— krisrice (@krisrice) September 15, 2014

I’ve been using SQLDeveloper for a long time now and have always set up my SSH tunnels separately (using tools like Meerkat). So I thought I’d take a look at how easy it is to use SSH tunnels natively with SQLDeveloper.

Turns out it’s incredibly simple to configure: simply go to your database connection properties.


Notice here that my connection is to localhost (since I’ll be using an SSH tunnel to connect to the remote machine), you could alternatively use a bastion host.

Click on the Advanced button and then select the SSH tab

All you need to do is specify the hostname you want to connect to along with the username. You could specify a password when you connect, but I’m using a keyfile so that I can essentially have passwordless login.

That’s it! Now when I connect to the remote connection, the SSH tunnel kicks in and I don’t need to worry about setting up an SSH tunnel externally to SQLDeveloper. Another nice side-effect is that if I export the connection, I can share it with someone else without having to tell them to set up an SSH tunnel, as that information is also embedded in the connection export file (although obviously they’d need either the password or the key file).

Perhaps not a marquee new feature in SQLDeveloper but a very welcome one!

Thinking about downgrading from Oracle Enterprise to Standard Edition?

Yann Neuhaus - 10 hours 54 min ago

Are you using Oracle Enterprise Edition and thinking about downgrading to Standard Edition? In that case, you must be sure that your applications are compatible. That is not easy to check. Here are a few ideas.

 

Why?

Why do you want to downgrade to Standard Edition? For licensing costs, of course. Today, it is difficult to find a server with only a few cores, and Oracle Enterprise Edition is licensed by the number of cores physically present in the machine. When you change your hardware, you will find that you cannot get a machine with the same number of cores. Even if the performance is fine, you will need to buy more software licenses because of those new multicore processors.

Another reason is virtualization. You want to consolidate your servers, but you don't want to pay database software licenses for all your datacenter capacity.

So Standard Edition is a good alternative: besides the fact that the licenses are cheaper, they are counted per socket and not per core.

Oracle Standard Edition doesn't have all the features, but you may be able to accept that. The reduction in license costs can pay for several days of development, tuning or administration, as well as for the acquisition of third-party tools that compensate for what is missing in SE (for example, dbvisit standby for high availability).

But you need to identify the features you are using that come with Enterprise Edition only.

 

1. Read feature availability

The features are listed in the documentation, which shows which ones are available in Standard Edition and which are available only in Enterprise Edition.

So the first thing to be done is to read that list and mark what you know you are using.

But there are two problems:

  • It's sometimes difficult to interpret. For example, is it clear to you that you can't send e-mails for Enterprise Manager notifications when you don't have the Diagnostics Pack?
  • You probably don't know everything that you (or your developers, or your application) actually use.

 

2. Query feature usage

Oracle comes with a nice view about feature usage: DBA_FEATURE_USAGE_STATISTICS. It's nice because you have information about what you have used, with comments, dates, etc. And it's also exposed in Cloud Control.
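
For illustration, a minimal sketch of a starting query (not an exhaustive audit) is simply to list the features the view reports as currently in use:

-- features the database reports as currently in use; interpretation is still up to you
SELECT name, version, detected_usages, currently_used, last_usage_date
FROM dba_feature_usage_statistics
WHERE currently_used = 'TRUE'
ORDER BY name;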

But did you ever try to match that with the documentation from the link above? That's difficult:

  • some Enterprise Edition features are not checked. For example, the usage of materialized views is reported, but without distinguishing those that use query rewrite (which is an EE feature)
  • some subsets of features trigger usage even when they should not (for example, the Locator part of Spatial does not need the Spatial option)

 

3. Import to standard

One important thing to do is to import into a Standard Edition database and check what fails with an 'ORA-00439: feature not enabled' error. What is nice is that when you install Standard Edition, the features that are not available are supposed to be disabled at link time.

One tip: you probably need to import metadata only, so you want to import it into a small database. But when you do that, you will see that your datafiles grow because of the initial segment sizes. This is because 'deferred segment creation' is an Enterprise Edition feature. So the tip is:

 

impdp ... content=metadata_only transform="storage:n"

 

The big advantage when testing the import is that you are already testing the migration procedure, because it's the only way to migrate from Enterprise Edition to Standard Edition.

The problem is that it warns you only about static features - those in your data model - not about usage. For example, you will know that you can't create bitmap indexes, but you will not know that you will no longer be able to do bitmap plan conversions from regular indexes.
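
A complementary (and admittedly partial) check is to query the source database for objects that plainly cannot exist in Standard Edition, for example bitmap indexes - a minimal sketch:

-- quick pre-check on the source EE database: bitmap indexes cannot be created in SE
SELECT owner, index_name, table_name
FROM dba_indexes
WHERE index_type = 'BITMAP';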

Testing the import guarantees that the migration can be done, but you should test the application on a SE database with data in order to validate usage and performance.

 

4. Try and test

After having checked everything, from the obvious things that are documented to the little things we know from experience, I usually advise the customer to test. Install a test database in Standard Edition. Test the application, test the monitoring, test the administration procedures (no online operations, no flashback database, ...). If you plan to migrate with minimum downtime using a replication tool (such as dbvisit replicate), you can start to replicate to a Standard Edition database. Then you will be able to test the read-only use cases, such as reporting, which may suffer from the lack of some optimizer features (adaptive plans, result cache, ...).

 

5. Decide

Then you will decide whether you are ready to downgrade to Oracle Standard Edition. Of course, it will not be transparent. You will have to find some workarounds. The decision is just a balance between the cost reduction and the time you can spend doing manually what was automatic in EE.

Book Review: Expert Consolidation in Oracle Database 12c

Doug Burns - 11 hours 9 min ago

It's fair to declare up front that the author of this book, Martin Bach, is one of my best friends in the Oracle community (even though I haven't known him for that long really) and someone who I truly respect for all sorts of reasons, both professionally and personally. One of the personality traits I think we share is that, if we thought something was truly awful, we might not call it out as such but we would certainly raise an eyebrow, chuckle internally and then have a private conversation later about just how awful it was!

So if this book was awful, I would probably just pretend that I hadn't read it and certainly wouldn't be writing about it! He *might* remain a friend but I might have to look for new friends who didn't write awful books ;-) (I wouldn't tell him that, of course!)

Fortunately, it is far from awful.

Although Martin has previously written great content on the second edition of the RAC book (in fact I think one of our earliest contacts was a quick informal review of the ASM chapter), this really reads like *his* book and his voice runs straight through the middle (and sides) of it, particularly in the earlier chapters - 1, 3 and 4. Maybe I should actually talk about what I mean by that and what the book is like, instead of sucking up to Martin for a moment?

- It reflects what is truly going on in the day-to-day world of Enterprise Database Computing. Which is to say that it reflects business realities of cost savings, increased consolidation and balancing technical possibilities with practical realities. This is important. I remember when I first started out in IT, I think I was technically good, understood computing subjects and could pick up new ones pretty quickly, helped by my prior career as an assembly language developer and yet I didn't have the first idea of what happened in the IT departments of the big companies because I'd never worked in one. It was very frustrating and took me a long time and a bit of luck to get the right job and learn it as I went. If you've never worked in a big bank, for example, Martin explains what is actually going on in those banks right now because he's been there, seen it and done it. (Or at least attempted to do it whilst fighting the usual internal bureaucracy!) If I'd had this book at the start of my career, I would have understood what banks were looking for and the language they were speaking!

- It's current. Although this won't remain true for some sections of the book (things change so fast in Oracle-land), the coverage of 12c is likely to ensure that in practical terms, it will be several years before even the majority of the technical material is out of date, regardless of what Oracle Corp might announce at the next Openworld conference! Enterprises do move forward constantly to maintain continued support (even when they have current versions that just work), but they don't tend to wrap themselves up in new versions of Oracle for a good couple of years.

- It's accomplished and knowledgeable without being smug or arrogant. This is a really important part of any book for me and one that I rarely find in the right balance. I want to read books by those who truly know and understand the subject matter, so I'm likely to gravitate towards books written by experts, but the purpose of the book should be to educate me and not to show off the author's knowledge as some precious gift from a far-flung planet! I wrote about this many years ago and the mere tone of some technical books is enough to make me stop reading. Martin doesn't tell you things because he's clever and you're not. He tells you them to help you do your job and to make your life easier by not having to go through the same pains that he probably did at first. But that doesn't mean the book is fluffy and lightweight either!

- As most authors will tell you, the quality of any technical book is largely dependent on the technical reviewer(s) because it's very difficult to spot one's own mistakes at times and I think that any treatment of technical matters benefits from a second opinion. Frits Hoogland might also be a friend but I also know he has the eye for detail and commitment to technical accuracy a book like this demands. I don't think he had much work to do on the descriptive passages (I happen to recognise Martin's stellar command of the English language and style) but I bet he pulled him up on the odd mistake or two ;-)

As seems increasingly the way with anything I write purporting to be a book review, I don't seem to have mentioned much about what is in the book and what it will teach you, but I suppose it's not exactly difficult for you to check those things yourself. When someone writes a book that I enjoyed reading and found myself nodding my head to as much as this one, I think it's worth my time nudging people in the right direction but letting them decide for themselves if it's the right book for them. Recommended, though!


P.S. I should also point out that I have not worked through the examples in the book because I'm in the process of changing my home setup but, when I do, this is the book I'll be turning to as I work my way through 12c installations.

Setting up Xubuntu in Lenovo Flex2 14D

Vattekkat Babu - 11 hours 25 min ago

The Lenovo Flex2 14D is a good laptop with decent build quality, light weight, a 14" screen and a touch screen for those who like it. With the AMD A6 processor version, it is reasonably priced too.

It comes pre-loaded with Windows 8.1 and a bunch of Lenovo software. If you want to get this to dual boot with Ubuntu Linux, here are the specific fixes you need to do.

XML Parsing Error: no element found Location: ...

Denes Kubicek - 11 hours 48 min ago
I am currently working on an application which is supposed to be exposed to public users. I created a simple Ajax process there which gets the value of an application item and does something based on that value. While calling the on-demand process I am getting this error when not authenticated:

XML Parsing Error: no element found Location: moz-nullprincipal:{6492ed13-2b81-4989-938c-7c19121f419a} Line Number 1, Column 1:



After searching for this error in combination with APEX I found my old thread from 2012 which basically wasn't answered:

https://community.oracle.com/thread/2444115?start=0&tstart=0

After investigating what is happening there, I noticed that the on-demand process I created is set to "Must Not Be Public User" in the authorization section. Obviously APEX does that by default for on-demand processes. Running the Ajax call as an unauthenticated public user therefore does not find the "element", which is the actual process.
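
For context, the process body itself can be as simple as the sketch below (the item name and values are hypothetical); the failure described here comes from the authorization setting on the process, not from the PL/SQL:

-- hypothetical on-demand (Ajax callback) process body reading an application item;
-- with the default "Must Not Be Public User" authorization, an unauthenticated call
-- never reaches this code and the browser receives an empty response
BEGIN
  IF v('MY_APP_ITEM') = 'Y' THEN
    htp.p('public content');
  ELSE
    htp.p('default content');
  END IF;
END;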

Categories: Development

Mark Hurd’s Latest Blog: Why IT Complexity Kills Business Innovation

Oracle Infogram - Mon, 2014-09-15 13:25
Contributions by Angela Golla, Infogram Deputy-Editor


Mark Hurd’s Latest Blog: Why IT Complexity Kills Business Innovation
 In this new blog, Mark writes, “Information Technology today touches every facet of business in ways that traditional enterprise technology never did. As a result, chief executives are demanding a lot more from IT than they’ve been getting. CEOs need IT to help lead the way to transformational change, to help keep up with customer demands for faster, more responsive engagement, to help gain market share, and to help prevent disruption by more nimble competitors.”
Read the blog to find out why Mark says this is where cloud, big data, and engineered systems come into play.

LMS and Open: The false binary is based on past, not future markets

Michael Feldstein - Mon, 2014-09-15 12:46

D’Arcy Norman has an excellent blog post up titled “On the false binary of LMS vs. Open” that captures a false framing issue.

We’re pushed into a false binary position – either you’re on the side of the evil LMS, working to destroy all that is beautiful and good, or you’re on the side of openness, love, and awesomeness. Choose. There is no possible way to teach (or learn) effectively in an LMS! It is EVIL and must be rooted out before it sinks its rotting tendrils into the unsuspecting students who are completely and utterly defenseless against its unnatural power!

While D’Arcy is a proponent of open tools, he rightly calls out the need to understand institutional responsibilities.

But. We can’t just abdicate the responsibility of the institution to provide the facilities that are needed to support the activities of the instructors and students. That doesn’t mean just “hey – there’s the internet. go to it.” It means providing ways for students to register in courses. For their enrolment to be automatically processed to provision access to resources (physical classrooms, online environments, libraries, etc…). For students’ grades and records to be automatically pushed back into the Registrar’s database so they can get credit for completing the course. For integration with library systems, to grant acccess to online reserve reading materials and other resources needed as part of the course.

This is an important point, in that the institutional LMS is important and will not, and should not, go away anytime soon. I have pointed out recently that the LMS is one of the very few technologies now used in a majority of courses within an institution, and the institutional responsibility described above helps to explain why.

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications.

At the same time, the LMS does a very poor job of providing a lot of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match the functionality of open internet tools, especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Below is a modified presentation I gave at the Apereo Mexico conference in the spring (with a few changes to explain slides without audio). The key point is that there are subtle changes to the LMS market that are significant, and the coexistence of the LMS with open tools will be central to the market’s future.

Will all LMS vendors move in this direction? In marketing, yes, but in reality, no. There are different approaches to this coexistence issue from the LMS vendors, ranging from lip service to outright support, and several points in between. But the overall trend is clearly moving in this direction, even if some solutions lose out over time.

Download (PDF, 4.21MB)

The post LMS and Open: The false binary is based on past, not future markets appeared first on e-Literate.

Misconceptions about privacy and surveillance

DBMS2 - Mon, 2014-09-15 11:07

Everybody is confused about privacy and surveillance. So I’m renewing my efforts to consciousness-raise within the tech community. For if we don’t figure out and explain the issues clearly enough, there isn’t a snowball’s chance in Hades our lawmakers will get it right without us.

How bad is the confusion? Well, even Edward Snowden is getting it wrong. A Wired interview with Snowden says:

“If somebody’s really watching me, they’ve got a team of guys whose job is just to hack me,” he says. “I don’t think they’ve geolocated me, but they almost certainly monitor who I’m talking to online. Even if they don’t know what you’re saying, because it’s encrypted, they can still get a lot from who you’re talking to and when you’re talking to them.”

That is surely correct. But the same article also says:

“We have the means and we have the technology to end mass surveillance without any legislative action at all, without any policy changes.” The answer, he says, is robust encryption. “By basically adopting changes like making encryption a universal standard—where all communications are encrypted by default—we can end mass surveillance not just in the United States but around the world.”

That is false, for a myriad of reasons, and indeed is contradicted by the first excerpt I cited.

What privacy/surveillance commentators evidently keep forgetting is:

  • There are many kinds of privacy-destroying information. I think people frequently overlook just how many kinds there are.
  • Many kinds of organization capture that information, can share it with each other, and gain benefits from eroding or destroying privacy. Similarly, I think people overlook just how pervasive the incentive is to snoop.
  • Privacy is invaded through a variety of analytic techniques applied to that information.

So closing down a few vectors of privacy attack doesn’t solve the underlying problem at all.

Worst of all, commentators forget that the correct metric for danger is not just harmful information use, but chilling effects on the exercise of ordinary liberties. But in the interest of space, I won’t reiterate that argument in this post.

Perhaps I can refresh your memory why each of those bulleted claims is correct. Major categories of privacy-destroying information (raw or derived) include:

  • The actual content of your communications – phone calls, email, social media posts and more.
  • The metadata of your communications — who you communicate with, when, how long, etc.
  • What you read, watch, surf to or otherwise pay attention to.
  • Your purchases, sales and other transactions.
  • Video images, via stationary cameras, license plate readers in police cars, drones or just ordinary consumer photography.
  • Monitoring via the devices you carry, such as phones or medical monitors.
  • Your health and physical state, via those devices, but also inferred from, for example, your transactions or search engine entries.
  • Your state of mind, which can be inferred to various extents from almost any of the other information areas.
  • Your location and movements, ditto. Insurance companies also want to put monitors in cars to track your driving behavior in detail.

Of course, these categories overlap. For example, information about your movements can be derived not just from your mobile phone, but also from your transactions, from surveillance cameras, and from the health-monitoring devices that are likely to become much more pervasive in the future.

So who has reason to invade your privacy? Unfortunately, the answer boils down to “just about everybody”. In particular:

  • Any internet or telecom business would like to know, in great detail, what you are doing with their offerings, along with any other information that might influence what you’re apt to buy or do next.
  • Anybody who markets or sells to consumers wants to know similar things.
  • Similar things are true of anybody who worries about credit or insurance risk.
  • Anybody who worries about fraud wants to know who you’re connected to, and also wants to match you against any known patterns of fraud-related behavior.
  • Anybody who hires employees wants to know who might be likely to work hard, get sick or quit.
  • Similarly, they’d like to know who does or might engage in employee misconduct.
  • Medical researchers and caregivers have some of the most admirable reasons for wanting to violate privacy.

And that’s even without mentioning the most obvious suspects — law enforcement and national security agencies of many kinds, who can be presumed, in at least certain cases, to be able to get any information that’s available to any other organization.

Finally, my sense is:

  • People appreciate the potential of fancy-schmantzy language and image recognition.
  • The graph analysis done on telecom metadata is so simple that people generally “get” what’s going on.
  • Despite all the “big data analytics” hype, commentators tend to forget just how powerful machine learning/predictive analytics privacy intrusions could be. Those psychographic clustering techniques devised to support advertising and personalization could be applied in much more sinister ways as well.

Related links

Categories: Other

Come to Oracle Open World

PeopleSoft Technology Blog - Mon, 2014-09-15 10:26
We hope most of our readers will be coming to Open World this year.  We have so much great stuff to share that we want everyone to see it first-hand.  We also enjoy meeting many of our customers in person, and Open World is a fine opportunity to discuss a variety of topics with you.  Here is a nice promotional video from the PeopleSoft YouTube channel to give you some of the highlights.

Benchmark: TokuDB vs. MariaDB / MySQL InnoDB Compression

Pythian Group - Mon, 2014-09-15 09:55

As the amount of data companies are interested in collecting grows, life becomes all the more difficult for IT staff at all levels within an organization. SAS Enterprise storage devices that were once considered giants are now being phased out in favor of SSD Arrays with features such as de-duplication, tape storage has pretty much been abandoned and the same goes without saying for database engines.

For many customers, just storing data is not enough because of the CAPEX and OPEX involved; smarter ways of storing the same data are required, since databases generally account for the greatest portion of storage requirements across an application stack. Lately they are used not only for storing data but, in many cases, also for storing logs. IT managers, developers and system administrators very often turn to the DBA and pose the age-old question “is there a way we can cut down on the space the database is taking up?” and this question seems to be asked all the more frequently as time goes by.

This is a dilemma that cannot easily be solved for a MySQL DBA. What would the best way to resolve this issue be? Should I cut down on binary logging? Hmm… I need the binary logs in case I need to track down the transactions that have been executed and perform point in time recovery. Perhaps I should have a look at archiving data to disk and then compress this using tar and gzip? Heck if I do that I’ll have to manage and track multiple files and perform countless imports to re-generate the dataset when a report is needed from historical data. Maybe, just maybe, I should look into compressing the data files? This seems like a good idea… that way I can keep all my data, and I can just take advantage of a few extra CPU cycles to keep my data to a reasonable size – or does it?

Inspired by this age-old dilemma, I decided to take the latest version of TokuDB for a test run and compare it to InnoDB compression, which has been around for a while. Both technologies promise a great reduction in disk usage and even performance benefits – naturally, if data resides on a smaller portion of the disk, access and seek times will decrease; however, this is less applicable to the SSD disks generally used in the industry today. So I put together a test system using an HP ProLiant Gen8 server with 4x Intel® Xeon® E3 processors, 4GB ECC RAM and a Samsung EVO SATA III SSD rated at 6Gb/s, and installed the latest version of Ubuntu 14.04 to run some benchmarks. I used the standard innodb-heavy configuration from the support-files directory with one change – innodb_file_per_table = ON. The reason for this is that TokuDB will not compress the shared tablespace, which would otherwise affect the results of the benchmarks. To be objective, I ran the benchmarks on both MySQL and MariaDB using 5.5.38, which is the latest bundled version for TokuDB.
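
To make the comparison concrete, here is a minimal sketch (table and schema names are made up) of how each approach is selected per table, plus a query to compare the on-disk footprint the server reports; note that ROW_FORMAT=COMPRESSED also requires innodb_file_per_table and the Barracuda file format:

-- hypothetical table using InnoDB compression
CREATE TABLE orders_innodb (
  id BIGINT NOT NULL PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- the same table stored in TokuDB (compression is enabled by default)
CREATE TABLE orders_tokudb (
  id BIGINT NOT NULL PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=TokuDB;

-- approximate size per table as reported by the server
SELECT table_name, engine,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'tpcc'
ORDER BY size_mb DESC;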

The databases were benchmarked for speed and also for the space consumed by the tpcc-mysql dataset generated with 20 warehouses. So let's first have a look at how much space was needed by TokuDB vs. InnoDB (using both compressed and uncompressed tables):

 

Configuration            | GB
TokuDB                   | 2,7
InnoDB Compressed Tables | 4,2
InnoDB Regular Tables    | 4,8

 

TokuDB was a clear winner here. Of course, the space savings depend on the type of data stored in the database, but with the same dataset TokuDB seems to be in the lead. Seeing such a gain in storage requirements will of course make you wonder how much overhead is incurred in reading and writing this data, so let's have a look at the “tpm-C” to understand how many orders can be processed per minute on each. Here I have also included results for MariaDB vs. MySQL. The first graph shows the number of orders that were processed per 10-second interval and the second graph shows the total “tpm-C” after the tests were run for 120 seconds:

 


Figure 1 – Orders processed @ 10 sec interval

 

All servers are version 5.5.38; values are orders processed in each 10-second interval.

Interval | MariaDB | MariaDB InnoDB Compressed | TokuDB on MariaDB | MySQL | MySQL InnoDB Compressed | TokuDB on MySQL
      10 |    5300 |                       529 |              5140 |  5667 |                      83 |            5477
      20 |    5743 |                       590 |              5112 |  5513 |                     767 |            5935
      30 |    5322 |                       596 |              4784 |  5267 |                     792 |            5931
      40 |    4536 |                       616 |              4215 |  5627 |                     774 |            6107
      50 |    5206 |                       724 |              5472 |  5770 |                     489 |            6020
      60 |    5827 |                       584 |              5527 |  5956 |                     402 |            6211
      70 |    5588 |                       464 |              5450 |  6061 |                     761 |            5999
      80 |    5679 |                       424 |              5474 |  5775 |                     789 |            6029
      90 |    5759 |                       649 |              5490 |  6258 |                     788 |            5998
     100 |    5288 |                       611 |              5584 |  6044 |                     765 |            6026
     110 |    4637 |                       575 |              4948 |  5753 |                     720 |            5314
     120 |    3696 |                       512 |              4459 |   930 |                     472 |             292

Figure 2 – “tpm-C” for the 120-second test run

MySQL Edition                           | “tpm-C”
TokuDB on MySQL 5.5.38                  | 32669.5
MySQL 5.5.38                            | 32310.5
MariaDB 5.5.38                          | 31290.5
TokuDB on MariaDB 5.5.38                | 30827.5
MySQL 5.5.38 InnoDB Compressed Tables   | 4151
MariaDB 5.5.38 InnoDB Compressed Tables | 3437

 

Surprisingly enough, however, the InnoDB table compression results were very low – perhaps this would have shown better results on regular SAS/SATA drives with traditional rotating disks. The impact on performance was incredibly high and the savings on disk space were marginal compared to those of TokuDB, so once again it seems we have a clear winner! TokuDB on MySQL outperformed both MySQL and MariaDB with uncompressed tables. The findings are interesting because in previous benchmarks of older versions of MariaDB and MySQL, MariaDB would generally outperform MySQL; however, there are many factors that should be considered.

These tests were performed on Ubuntu 14.04, while the previous tests I mentioned were performed on CentOS 6.5, and the hardware was also slightly different (Corsair SSD 128GB vs. Samsung EVO 256GB). Please keep in mind that these benchmarks reflect the performance of a specific configuration, and there are many factors that should be considered when choosing the MySQL / MariaDB edition to use in production.

As per this benchmark, the results for TokuDB were nothing less than impressive and it will be very interesting to see the results on the newer versions of MySQL (5.6) and MariaDB (10)!

Categories: DBA Blogs

Industrial-grade cyberattacks plague utilities

Chris Foot - Mon, 2014-09-15 08:57

While the retail sector has received much attention lately due to the prevalence of cyberattacks, utilities are also in grave danger of sustaining Web-based aggression.

Database administration services are acknowledging the persistence of Dragonfly, an organization of cybercriminals that focuses on infiltrating the industrial sector. Securing Web-based assets, as well as on-premise architectures, is imperative, as these carefully orchestrated hacks could leave denizens without power indefinitely.

A grievous situation
Warwick Ashford, a contributor to Computer Weekly, noted an instance in which he and his team participated in the Kaspersky Industrial Protection Simulation role-playing game. The scenario presented a situation in which a regional water purification plant was infiltrated, leaving thousands of people without drinking water. In addition, the company sustained incredible losses within a five-week period. 

The culprit? Dragonfly, a co-op based out of Eastern Europe that security firm Symantec has blamed for hacking more than 1,000 North American and European energy enterprises. Dragonfly specializes in disengaging industrial control systems, which manage electrical, water, gas and oil systems. 

Why it's so dangerous
Ashford noted the water treatment company intended to train its IT staff in cybersecurity protocols, but apparently never got around to doing so. After a number of network audits were performed, the company recognized two unidentified Internet connections to the industrial control system. The IT department responded by setting up a number of firewalls. 

However, after the security solutions were implemented, Ashford's team still received notifications of unhealthy chlorine levels in its firm's drinking water. Through phishing scams, the perpetrators were able to manipulate the industrial control system's output and filtration of chlorine. 

While the aforementioned situation was simply a simulation, that doesn't change the fact that Dragonfly is capable of inflicting such damage. Imagine the harm such a terrorist group could cause in this scenario.

Real-world situations 
FierceSmartGrid noted one instance in 2011 in which Dragonfly initiated "Energetic Bear," a kind of attack that leveraged two remote access Trojans. These creations form a kind of bridge between cybercriminals and targeted architectures. Basically, they allowed Dragonfly to insert malware into a U.S. utility's databases. 

The source noted a statement by Symantec, which maintained Dragonfly's initiatives are focused on "extracting and uploading stolen data, installing further malware onto systems and running executable files on infected computers."

It's this particular breed of cybercriminal that has instigated the need for database active monitoring. Maintaining a high level of thorough surveillance at all times is a must, especially for utilities distributing essential resources to consumers.

The post Industrial-grade cyberattacks plague utilities appeared first on Remote DBA Experts.

TCPS and SSLv2Hello

Laurent Schneider - Mon, 2014-09-15 08:19

Thanks to platform independence, the same java code work on different platforms.


import java.util.Properties;
import java.security.Security;
import java.sql.*;
import javax.net.ssl.*;

public class KeyStore {
  public static void main(String argv[]) 
      throws SQLException {
    String url="jdbc:oracle:thin:@(DESCRIPTION="+
      "(ADDRESS=(PROTOCOL=TCPS)(Host=SRV01)("+
      "Port=1521))(CONNECT_DATA=(SID=DB01)))";
    Properties props = new Properties();
    props.setProperty("user", "scott");
    props.setProperty("password", "tiger");
    props.setProperty("javax.net.ssl.trustStore",
      "keystore.jks");
    props.setProperty(
      "javax.net.ssl.trustStoreType","JKS");
    props.setProperty(
      "javax.net.ssl.trustStorePassword","***");
    DriverManager.registerDriver(
      new oracle.jdbc.OracleDriver());
    Connection conn = 
      DriverManager.getConnection(url, props);
    ResultSet res = conn.prepareCall("select "+
       "sys_context('USERENV','NETWORK_PROTOCOL"+
       "') txt from dual").
         executeQuery();
    res.next();
    System.out.println("PROTOCOL: "+
      res.getString("TXT"));
  }
}

The code above works perfectly on Linux and Windows.

Okay, in AIX you will get IllegalArgumentException SSLv2Hello at com.ibm.jsse2.sb.a if you don’t add


props.setProperty("oracle.net.ssl_version","3.0");

The default does not work with the Oracle AIX client. Just set it explicitly (to 1.0 or 3.0) and you will be a bit less platform-dependent.

OOW - Focus On Support and Services for Fusion Apps/Fusion Middleware

Chris Warticki - Mon, 2014-09-15 08:00
Focus On Support and Services for Fusion Apps/Fusion Middleware

Monday, Sep 29, 2014 - Conference Sessions

  • Oracle ERP Cloud: Overview, Implementation Strategies, and Best Practices
    Patricia Burke, Director, Oracle
    5:00 PM - 5:45 PM, Westin Market Street - City, CON7288
  • Understanding Patching for Your Oracle Fusion Cloud Services
    Marc Lamarche, Senior Director, Global Fusion HCM Support, Oracle
    5:15 PM - 6:00 PM, Moscone West - 3007, CON8476

Tuesday, Sep 30, 2014 - Conference Sessions

  • Best Practices for Maintaining Oracle Fusion Middleware
    Ken Vincent, Senior Principal Technical Support Engineer, Oracle
    10:45 AM - 11:30 AM, Moscone West - 3022, CON8285

Wednesday, Oct 01, 2014 - Conference Sessions

  • Modernize Your Analytics Solutions
    Rob Reynolds, Senior Director, Oracle; Hermann Tse, Oracle; Gary Young, Senior Director, Big Data / Analytics, Oracle
    10:15 AM - 11:00 AM, Moscone West - 3016, CON5238
  • Is Your Organization Trying to Focus on an ERP Cloud Strategy?
    Patricia Burke, Director, Oracle; Bary Dyer, Vice President, Oracle
    10:00 AM - 10:45 AM, Westin Market Street - Concordia, CON7614
  • Compensation in the Cloud: Proven Business Case
    ARUL_SENAPATHI@AJG.COM, Director, Global Oracle HRIS; Rich Isola, Sr. Practice Director, Oracle; Kishan Kasety, Consulting Technical Manager, Oracle
    12:30 PM - 1:15 PM, Palace - Gold Ballroom, CON2709
  • Succession and Talent Review at Newfield Exploration
    Blane Kingsmore, HRIS Manager, Newfield Exploration; Rich Isola, Sr. Practice Director, Oracle; Louann Weaver, Practice Director, Oracle
    3:00 PM - 3:45 PM, Palace - Gold Ballroom, CON2712

Thursday, Oct 02, 2014 - Conference Sessions

  • Oracle Sales Cloud: Overview, Implementation Strategies, and Best Practices
    Tom Griffin, Sr. Principal Consultant, Oracle; Mary Wade, Solution Manager, Oracle
    10:15 AM - 11:00 AM, Moscone West - 2001, CON7331
  • Is Your Organization Trying to Focus on a CX Cloud Strategy?
    John Cortez, Principle Solutions Architect, Oracle; Won Park, Consulting Solution Director, Oracle; Mary Wade, Solution Manager, Oracle
    11:30 AM - 12:15 PM, Moscone West - 3009, CON7575

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.

  • Monday, Sep 29: 9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908
  • Tuesday, Sep 30: 9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908
  • Wednesday, Oct 01: 9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

SQL Saturday 323: SQL Server AlwaysOn and availability groups session slides

Yann Neuhaus - Sun, 2014-09-14 23:50

This SQL Saturday edition in Paris is now over. It was a great event with a lot of French and international speakers. There were also many attendees, which shows that this event is a great place to share knowledge about SQL Server technologies. Maybe the Montparnasse tower in Paris played a role here, with its panoramic view over Paris from the 40th floor!




For those who didn’t attend on Saturday, you will find our SQL Server AlwaysOn and availability groups session slides here: SQLSaturday-323-Paris-2014---AlwaysOn-session.pptx

Don’t forget the next big event of the SQL Server community in Paris (December 1-2): Journées SQL Server

We will probably be there and, of course, we will be delighted to meet you!

Documentum upgrade project - ActiveX and D2-Client 3.1Sp1

Yann Neuhaus - Sun, 2014-09-14 19:31

This is another blog post about our Documentum upgrade project. This time, the following issue occurred: the ActiveX could not be installed using the D2-Client. We had to access the D2-Config URL to have it installed, which is not an option for a normal user.

Analysis

The workstation had the ActiveX for D2 3.0 installed, the version before the upgrade. Under C:\Windows\Downloaded Program Files, we had:

ctx
ctx.ocx  
D2UIHelper.dll

On my workstation, where I could install the D2 3.1.1 ActiveX (using D2-Config), I also had C:\Windows\Downloaded Program Files\CONFLICT.* folders containing D2UIHelper.dll and ctx.inf.

By checking the content of ctx.inf in this new cab, I saw that we had the wrong version (see FileVersion) of ctx.ocx:

 [ctx.ocx]  
file-win32-x86=thiscab  
RegisterServer=yes  
clsid={8C55AA13-D7D9-4539-8B20-78BC4A795681}  
DestDir=  
FileVersion=3,0,0,2

By checking the "ctx.cab" file in "D2-Client/install" and "D2-Config/install" on the application server I found that we did not have the same version, both ctx-cab had the same date and size but the digital signature was different:  

D2-Config ctx-cab: &8206;17 &8206;September &8206;2013 10:56:11,  
D2-Client: 19 &8206;April &8206;2013 17:03:08

 

Solution

To solve the issue I copied the "ctx.cab" from the "D2-Config/install" path to "D2-Client/install/". Once this was done, the ActiveX could be installed using the D2-Client URL.

It was confirmed by the vendor that this is a bug in the delivered package.

Simple Solutions Do Not Equal Easy Builds

Floyd Teter - Sun, 2014-09-14 18:28
Am I the only one who often tries to make solutions far more difficult than needed?  My first approach to any challenge is likely to be the most complicated thing I could create.  For example, I was working on something with Oracle APEX last week.  I came up with what I thought was a nifty new feature and started building.  After the equivalent of several hundred lines of code, I had something that worked...just not as well as I hoped.

After sitting back and letting things percolate...with a bit of cussing and fussing...I wound up deleting everything I'd built for that nifty new feature.  Replaced it with about two minutes of work.  The replacement was probably the equivalent of 25 or 30 lines of code.  And now the feature worked exactly as I hoped.

Yeah, I'm pretty sure I could complicate a ball bearing if given the opportunity to go off and run with the first ideas that pop into my head.

My point in all this...simple solutions do not equal easy builds, at least when it comes to building solutions.  It takes brain power to refine ideas and initial concepts into simple, elegant solutions.

Eli and the Runaway Diaper now available!

FeuerThoughts - Sun, 2014-09-14 09:12

In 2013, the big sensation in (my) children's publishing was the release of Vivian Vulture and the Cleanup Culture.

In 2014, the honor goes to Eli and the Runaway Diaper.
It's a book about a diaper that gets tired of the day in day out grind of covering Eli's bottom (the names have been changed to protect the innocent). It decides that it's time to look around for a new and hopefully better (more appreciative) bottom.

Eli is initially dismayed, but happy to join the diaper on its quest, so off they go on a grand adventure!

Illustrated by Robert Melegari, it's a fun, light-hearted journey to self-discovery and self-improvement.

You can order it on Amazon, CreateSpace, and so on. But if you order it from me, I will sign it and ship it off to you, all for the list price of $12.99.
Categories: Development