Christopher Jones


See What Your Guests Think with Data Visualization

Mon, 2018-05-21 06:00

As we approach the end of May, thoughts of summer and vacations begin. Naturally, a key component is finding the best place to stay and often that means considering the hotel options at your chosen destination. But what’s the best way to decide? That’s where reading reviews is so important.   

And that brings us to the latest blog in the series of taking datasets from ‘less typical’ sources and analyzing them with Oracle Data Visualization. Here, we’ve pulled the reviews from Booking.com as a dataset and visualized it to see how we – the general public – rate the hotels we stay in.

Working with Ismail Syed, pre-sales intern, and Harry Snart, pre-sales consultant, both from Oracle UK, we ran the analysis and created visualizations. We decided to look at the most common words used in both positive and negative reviews, see how long each of them is – and work out which countries are the most discerning when they give their feedback. 

So, what are the main irritations when we go away? Conversely - what's making a good impression?

Words of discontent

First, we wanted to combine the most commonly used words in positive reviews with those most commonly used in negative reviews. You can see these in the stacked bar chart below. Interestingly, 'room' and 'staff' both appear in the positive and negative lists. However, there are far more positive reviews around staff than negative ones, and likewise a lot more negative reviews around the room than positive ones.

It seems then, across the board, guests find customer service better than the standard of the rooms they receive – implying that an effective way to boost client retention would be to start by improving rooms. In particular, guests complained about the small size of the rooms, which is a tough fix, but they were more upset about the standard of the beds, bathrooms and toilets, which can be updated rather more easily.

You’ll also notice 'breakfast' appears prominently in both the positive and negative word clouds – so a more achievable fix could be to start there. A bad breakfast can leave a bad taste, but a good one is obviously remembered. 

Who’ll give a good review?

Next, we wanted to see who the most complimentary reviewers were, by nationality. While North Americans, Australians and Kyrgyz (highlighted in green) tend to leave the most favorable reviews, hotels have a harder time impressing those from Madagascar, Nepal and Mali (in red). Europeans sit somewhere in the middle – except for Bosnia and Herzegovina, who like to leave an upbeat review.   

Next, we wanted to see who is the most verbose in their feedback – the negative reviewers or the positive reviewers – and which countries leave the longest posts.

Are shorter reviews sweeter?

Overall, negative reviews were slightly longer, but only by a small amount – contrary to the popular belief that we tend to ‘rant’ more when we’re perturbed about something. People from Trinidad and Tobago left the longest good reviews, at an average of 29 words. Those from Belarus, the USA and Canada followed as the wordiest positive reviewers. On the flip side, the Romanians, Swedish, Russians and Germans had a lot to say about their bad experiences – leaving an average of 22 words showing their displeasure.

It's business, but also personal...

Clearly, data visualization doesn't need to be a tool just for the workplace; you can deploy it to gain insight into other aspects of life as well – including helping you prepare for some valuable time off.

If you’re an IT leader in your organization and need to enable insights for everyone across the business, you should consider a complete, connected and collaborative analytics platform like Oracle Analytics Cloud. Why not find out a bit more and get started for free?

Simply interested in visual analysis of your own data? Why not see what you can find out by taking a look at our short demo and signing up for an Oracle Data Visualization trial?

Either way, make sure you and your business take a vacation from spreadsheets and discover far more from your data through visualization.

HR today: right skills, right place, right time, right price

Mon, 2018-05-21 05:49

The only constant in today’s work environment is change. If you’re going to grow and stay competitive in this era of digital transformation, your business has to keep up—and HR must too.

A wide range of factors means that HR constantly has to grow and transform—changing demographics, new business models, economic uncertainty, evolving employee expectations, the bring-your-own-device revolution, increased automation, AI, the relentless search for cost savings, and more.

Things are different today. In the past, business change processes typically had a start and target end date, with specific deliverables that were defined in advance. Now change is open-ended, and its objectives evolve over time—based on the world as it is, rather than a set of assumptions. An agile model for transformation is therefore essential, along with a decision-making process that can survive constant change.

The fact is that people are still—and will always be—the most important part of any business, so HR has to be closely aligned to your overall business goals, delivering benefits to the whole organisation. Every move your HR team makes should be focused on how to deliver the right skills in the right place, at the right time and at the right price, to achieve your business’s goals.

 

Workforce planning

To manage your workforce effectively as the needs of your business change, you need to know what talent you have, where it’s located—and also what skills you are likely to need in the future. It’s much easier to fill skills gaps when you can see, or anticipate, them.

 

Deliver maximum value from your own people

And it’s much easier to do if you’ve already nurtured a culture of personal improvement. Giving people new opportunities to learn and develop, and a sense of control over their own careers, will help you maintain up-to-date skills within your business and also identify the best candidates—whether for promotion, relocation within the company or to take on specific roles. Moreover, it should enable them to, for example, pursue areas of personal interest, train for qualifications, or perhaps work flexibly—all of which will improve loyalty and morale.

You can also look for skills gaps that you absolutely must recruit externally to fill, and understand how best to do that, especially at short notice. What are the most cost-efficient and effective channels, for example? You might consider whether offshoring for skills is helpful, or maintaining a base of experienced temporary workers that you can call on.

 

Unknown unknowns

Yet these are all known gaps. Organisations now also have to consider recruiting people for jobs that are still unknown. Some estimates suggest that as many as two-thirds of primary school children will end up working in jobs that don’t yet exist. So what new roles are being created in your industry, and how are you selecting people who will be able to grow into them?

 

Maximise the value of your HR function

Your HR organisation must be capable of supporting these changes, and ready to do so, and that means three things. First, the strategic workforce planning activities described above, supported by modern data and analytics. Next, HR has to provide the very best employee experience possible, enabling personal development and support. Finally, it needs to be able to support the process of constant change itself, and move to a more agile way of operating.

 

Get the culture right

Creating and nurturing a strong culture is essential here, and that relies on close co-ordination between HR, line managers and employees. Having a core system of record on everyone’s roles and various skills supports all these objectives, and can help you to grow your business through the modern era of change.

 

Essential enablers for implementing a modern product strategy

Mon, 2018-05-21 05:49

Continuous improvement across your entire mix of products and services is essential to innovate and stay competitive nowadays. Digital disruption requires companies to transform, successfully manage a portfolio of profitable offerings, and deliver unprecedented levels of innovation and quality. But creating your product portfolio strategy is only the first part—four key best practices are necessary to successfully implement it.

New technologies—the Internet of Things (IoT), Big Data, social media, 3D printing, and digital collaboration and modelling tools—are creating powerful opportunities to innovate. Increasingly customer-centric propositions are being delivered ‘as-a-service’ via the cloud, with just-in-time fulfilment joining up multiple parts of the supply chain. Your products and services have to evolve continually to keep up, generating massive amounts of data that have to be fed back in to inform future development.

 

Common language

To minimise complexity, it’s essential that there is just one context for all communication. You therefore need a standardised—and well-understood—enterprise product record that acts as a common denominator for your business processes. And that means every last piece of information—from core service features to how your product uses IoT sensors; from business processes to your roadmap for innovation, and all other details—gets recorded in one place, in the same way, for every one of your products, from innovation through development to commercialisation.

That will make it far easier for you to collect and interpret product information; define service levels and deliver on them; support new business models, and manage the overall future design of your connected offerings. Moreover, it enables your product development methods to become more flexible, so they can be updated more frequently, enabled by innovations in your supply chain, supported more effectively by IT, and improved over time.

 

Greater quality control in the digital world…

By including form, fit and function rules—that describe the characteristics of your product, or part of it—within the product record, you add a vital layer of change control. It enables you to create a formal approvals process for quality assurance. For example, changes made in one area—whether to a product or part of it—may create problems in other areas. The form, fit and function rules force you to perform cross-functional impact analyses and ensure you’re aware of any consequences.

As part of this, you can run simulations with ‘digital twins’ to predict changes in performance and product behaviour before anything goes wrong. This obviously has major cost-saving implications, enabling far more to be understood at the drawing-board stage. Moreover, IoT applications can be leveraged to help product teams test and gather data from your connected assets or production facilities.

 

Transparency and effective communications

The enterprise product record should also contain a full audit trail of decisions about the product, including data from third parties, and from your supply chain. The objective is full traceability from the customer perspective—with evidence of regulatory compliance, provenance of preferred suppliers, and fully-auditable internal quality processes. Additionally, it’s often helpful to be able to prove the safety and quality of your product and processes, as that can be a key market differentiator. Powerful project management and social networking capabilities support the collaborative nature of the innovation process.

 

Lean and efficient

Overall, your innovation platform should be both lean and efficient, based on the continual iteration of the following key stages:

  • Ideation, where you capture, collaborate and analyse ideas
  • Proposal, where you create business cases and model potential features
  • Requirements, where you evaluate, collaborate and manage product needs
  • Concepts, where you accelerate product development and define structures
  • Portfolio analysis, where you revise and optimise your product investment
  • Seamless integration with downstream ERP and supply chain processes

 

The result: Powerful ROI

Being able to innovate effectively in a digital supply chain delivers returns from both top-line growth—with increased revenues and market share—and reduced costs from improved safety, security, sustainability and fewer returns.

 

 

Cloud: Look before you leap—and discover unbelievable new agility

Mon, 2018-05-21 05:48

All around the world, finance teams are now fully embracing the cloud to simplify their operations. The heady allure of reduced costs, increased functionality, and other benefits are driving the migration. Yet what’s getting people really excited is the unexpected flush of new business agility they experience after they’ve made the change.

At long last, the cloud is becoming accepted as the default environment to simplify ERP and EPM. Fifty-six percent* of finance teams have already moved to the cloud—or will do so within the next year—and 24% more plan to move at some point soon.

 

Major cost benefits in the cloud

Businesses are making the change to enjoy a wide range of benefits. According to a recent survey by Oracle*, reducing costs is (predictably) the main motivation, with improved functionality in second place—and culture, timing and the ability to write off existing investments also key factors. The financial motivation breaks down into a desire to avoid infrastructure investment and on-premises upgrades, and also to achieve a lower total cost of ownership.

And Cloud is delivering on its promise in all these areas—across both ERP and EPM, 70% say they have experienced economic benefits after moving to the cloud.

 

Leap for joy at cloud agility

But the biggest overall benefit of moving to the cloud—quoted by 85% of those who have made the change—is staying current on technology. Moreover, 75% say that cloud improves usability, 71% say it increases flexibility and 68% say that it enables them to deploy faster. Financial gain is the top motivation for moving to the cloud, but that’s only the fourth-ranked advantage overall once there. It turns out that the main strengths of the cloud are in areas that help finance organisations improve business agility.

These are pretty amazing numbers. Until fairly recently, it would have been unheard of for any decent-sized organisation to consider migrating its core ERP or EPM systems without a very, very good reason. Now, the majority of companies believe that the advantages of such a move—and specifically, moving to the cloud—outweigh any downside.

 

The commercial imperative

Indeed, the benefits are more likely viewed as a competitive necessity. Cloud eliminates the old cycle of new system launches every two or three years—replacing it with incremental upgrades several times each year, and easy, instant access to additional features and capabilities.

And that is, no doubt, what’s behind the figures above. Finance professionals have an increasingly strong appetite to experiment with and exploit the latest technologies. AI, robotic process automation, internet of things, intelligent bots, augmented reality and blockchain are all being evaluated and used by significant numbers of organisations.

They’re improving efficiency in their day-to-day operations, joining-up operating processes across their business and reducing manual effort (and human error) through increased automation. Moreover, AI is increasingly being applied to analytics to find answers to compelling new questions that were, themselves, previously unthinkable—providing powerful new strategic insights.

Finance organisations are becoming more agile—able to think smarter, work more flexibly, and act faster using the very latest technical capabilities.

 

But it’s only available via cloud-based ERP and EPM

Increasingly, all these advances are only being developed as part of cloud-based platforms. And more and more advanced features are filtering down to entry-level cloud solutions—at least in basic form—encouraging finance people everywhere to experiment with what’s possible. That means, if you’re not yet using these tools in the cloud, you’re most likely falling behind your competitors that are—and that applies both from the broader business perspective and from the internal operating competency viewpoint.

The cloud makes it simple to deploy, integrate and experiment with new capabilities, alongside whatever you may already have in place. It has become the new normal in finance. It seems like we’re now at a watershed moment where those that embrace the potential of cloud will accelerate away from those that do not, and potentially achieve unassailable new operating efficiencies.

The good news is that it’s easy to get started.  According to MIT Technology Review in a 2017 report, 86% of those making a transition to the cloud said the costs were in line with, or better than expected, and 87% said that the timeframe of transition to the cloud was in line with, or better than expected.

_______

* Except where stated otherwise, all figures in this article are taken from ‘Combined ERP and EPM Cloud Trends for 2018’, Oracle, 2018.

 

You’ve got to start with the customer experience

Mon, 2018-05-21 05:47

Visionary business leader Steve Jobs once remarked: ‘You’ve got to start with the customer experience and work backwards to the technology.’ From someone who spent his life creating definitive customer experiences in technology itself, these words should carry some weight—and are as true today as ever.

The fact is that customer experience is a science, and relevance is its key goal. A powerful customer experience is essential to compete today. And relevance is what cuts through the noise of the market to actually make the connection with customers.

 

The fundamentals of success

For companies to transform their customer experience, they need to be able to streamline their processes and create innovative customer experiences. They also have to be able to deliver by connecting all their internal teams together so they always speak with one consistent voice.

But that’s only part of the story. Customers have real choice today. They’re inundated with similar messages to yours and are becoming increasingly discerning in their tastes.

Making yourself relevant depends on the strength of your offering and content, and the effectiveness of your audience targeting. It also depends on your technical capabilities. Many of your competitors will already be experimenting with powerful new technologies to increase loyalty and drive stronger margins.

 

The value of data

Learning to collect and use relevant customer data is essential. Data is the lifeblood of modern business—it’s the basis of being able to deliver any kind of personalised service on a large scale. Businesses need to use data to analyse behaviour, create profiles for potential new customers, build propositions around those target personas and then deliver a compelling experience. They also need to continually capture new data at every touchpoint to constantly improve their offerings.

Artificial intelligence (AI) and machine learning (ML) have a key role to play both in the analysis of the data and also in the automation of the customer experience. These technologies are developing at speed to enable us to improve our data analysis, pre-empt changing customer tastes and automate parts of service delivery.

 

More mature digital marketing

You can also now add all kinds of technologies straight out of sci-fi to the customer experience mix. The internet of things (IoT) is here, with connected devices providing help in all kinds of areas—from keeping you on the right road to telling you when your vehicle needs maintenance, from providing updates on your order status to delivering personal service wherever you are, and much more—enabling you to drive real transformation.

Moreover, intelligent bots are making it much easier to provide high-quality, cost-effective, round-the-clock customer support—able to deal with a wide range of issues—and using ML to improve their own performance over time.

Augmented reality makes it possible to add contextual information, based on your own products and services, to real-world moments. So, if you’re a car manufacturer you may wish to provide help with simple roadside repairs (e.g. change of tire) via a smartphone app.

 

Always omnichannel

Finally, whether at the pre-sale or delivery stage, your customer experience platform must give you the ability to deliver consistency at every touchpoint. Whatever the channel, whatever the time, whatever the context, your customers should feel they are dealing with a single, consistent business.

Indeed, as Michael Schrage, who writes for the Harvard Business Review, said: ‘Innovation is an investment in the capabilities and competencies of your customers. Your future depends on their future.’ So you have to get as close as possible to your customers to learn what they want today, and understand what experiences they are likely to want tomorrow. Work backwards from that and use any technology that can help you deliver it.

How APIs help make application integration intelligent

Mon, 2018-05-21 05:47

Artificial intelligence (AI) represents a technology paradigm shift, with the potential to completely revolutionise the way people work over the next few years. Application programming interfaces (APIs) are crucially important in enabling the rapid development of these AI applications. Conversely, AI is also being used to validate APIs themselves, and to analyse and optimise their performance.

Wikipedia defines an API as a ‘set of subroutine definitions, protocols and tools for building application software’. In slightly less dry terms, an API is basically a gateway to the core capabilities of an application, enabling that functionality to be built into other software. So, for example, if you were creating an app that needed to show geographic location, you might choose to implement the Google Maps API. It’s obviously much easier, faster and more future-proof to do that than to build your own mapping application from scratch.

 

How APIs are used in AI

And that’s the key strength of APIs—they’re a hugely efficient way of enabling networked systems to communicate and draw on each other’s functionality, offering major benefits for creating AI applications.

Artificially intelligent machine ‘skills’ are, of course, just applications that can be provided as APIs. So if you ask your voice-activated smart device—whether it’s Siri, Cortana, Google Assistant, or any of the rest—what time you can get to the Town Hall by bus, its response will depend on various skills that might include:

  • Awareness of where you are—from a geo-location API
  • Knowledge of bus routes and service delays in your area—from a publicly available bus company API
  • Tracking of general traffic and passenger levels—from APIs that show user locations provided by mobile device manufacturers
  • Being able to find the town hall—from a mapping API

None of these APIs needs to know anything about the others. They simply take information in a pre-defined format and output data in their own way. The AI application itself has to understand each API’s data parameters, tie all their skills together, apply the intelligence and then process the data.
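To make that orchestration concrete, here is a minimal sketch (not from the original article) of an application combining two such APIs in Python; the endpoint URLs and response fields are hypothetical stand-ins for the kinds of skill APIs listed above:

import requests

# Hypothetical endpoints standing in for a geo-location API and a bus-times API.
GEO_API = "https://api.example.com/geolocate"
BUS_API = "https://api.example.com/bus-times"

def minutes_to_town_hall(device_id):
    # Skill 1: where is the user? (geo-location API)
    loc = requests.get(GEO_API, params={"device": device_id}).json()

    # Skill 2: when does the next bus from there reach the Town Hall? (bus company API)
    trip = requests.get(BUS_API, params={"from_lat": loc["lat"],
                                         "from_lon": loc["lon"],
                                         "to": "Town Hall"}).json()

    # The application ties the independent answers together for the user.
    return trip["eta_minutes"]

print(minutes_to_town_hall("my-phone"))

Each API remains oblivious to the others; the combining logic lives entirely in the application layer.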

 

Everything is possible

That means you can combine the seemingly infinite number of APIs that exist in any way you like, giving you the power to produce highly advanced applications—and create unique sources of value for your business. You could potentially build apps to enhance the customer experience, improve your internal processes, and analyse data more effectively to strengthen decision making—and perhaps even identify whole new areas of business to get into.

 

How AI is being used to improve APIs

APIs are the ideal way of getting information into AI applications and also helping to streamline analytics—yet artificial intelligence also has a vital role to play within API development itself. For example, AI can be used to automatically create, validate and maintain API software development kits (implementations of APIs in multiple different programming languages).

AI can also be used to monitor API traffic. By analysing calls to APIs using intelligent algorithms, you can identify problems and trends, potentially helping you tailor and improve the APIs over time. Indeed, AI can be used to analyse internal company system APIs, for example, helping you score sales leads, predict customer behaviour, optimise elements of your supply chain, and much more.
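As a simple stand-in for the ‘intelligent algorithms’ mentioned above (a sketch for illustration, not a product feature), even basic statistics over an API access log can surface problem endpoints worth investigating:

from collections import defaultdict

# Hypothetical access log: (endpoint, HTTP status) pairs gathered from an API gateway.
calls = [("/orders", 200), ("/orders/42", 200), ("/payments", 500),
         ("/orders", 200), ("/payments", 502), ("/payments", 200)]

totals, errors = defaultdict(int), defaultdict(int)
for endpoint, status in calls:
    totals[endpoint] += 1
    if status >= 400:
        errors[endpoint] += 1

# Flag endpoints whose error rate exceeds an arbitrary 25% threshold.
for endpoint in totals:
    rate = errors[endpoint] / totals[endpoint]
    if rate > 0.25:
        print(f"Investigate {endpoint}: {rate:.0%} of calls are failing")

A machine learning model could replace the fixed threshold, learning normal traffic patterns per endpoint and flagging deviations over time.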

 

GDPR: What are the priorities for the IT department?

Mon, 2018-05-21 05:46

All too often it is assumed that GDPR compliance is ‘IT’s problem’ because having your personal data and technology in order are such vital parts of it. But compliance must be an organisation-wide commitment. No individual or single department can make an organisation compliant. However, in planning discussions around GDPR compliance, there are clear areas where IT can add significant value.

 

1. Be a data champion

The potential value of data to organisations is increasing all the time, but many departments, business units and even board members may not realise how much data they have access to, where it resides, how it is created, how it could be used and how it is protected. The IT department can play a clear role in helping the organisation understand why data, and by extension GDPR, is so important, so that it can realise the value of that data and learn how to use and protect it.

 

2. Ensure data security

GDPR considers protection of personal data a fundamental human right. Organisations need to ensure they understand what personal data they have access to and put in place appropriate protective measures. IT has a role to play in working with the organisation to assess security risks and ensure that appropriate protective measures, such as encryption, access controls, attack prevention and detection are in place.

 

3. Help the organisation be responsive

GDPR requires organisations not only to protect personal data but also to respond to requests from individuals who, among other things, want to amend or delete the data held on them. That means that personal data must be collected, collated and structured in a way that enables effective and reliable control over all of it. This means breaking down internal silos and ensuring an organisation has a clear view of its processing activities with regard to personal data.

 

4. Identify the best tools for the job

GDPR compliance is as much about process, culture and planning as it is about technology. However, there are products available which can help organisations with key elements of GDPR compliance, such as data management, security and the automated enforcement of security measures. Advances in automation and artificial intelligence mean many tools offer a level of proactivity and scalability that doesn’t lessen the responsibility upon people within the organisation, but can reduce the workload and put in place an approach which can evolve with changing compliance requirements.

 

5. See the potential

An improved approach to security and compliance management, fit for the digital economy, can give organisations the confidence to unlock the full potential of their data. If data is more secure, better ordered and easier to make sense of, it stands to reason an organisation can do more with it. It may be tempting to see GDPR as an unwelcome chore. It should however be borne in mind that it is also an opportunity to seek differentiation and greater value, to build new data-driven business models, confident in the knowledge that the data is being used in a compliant way.  Giving consumers the confidence to share their data is also good for businesses.

 

The IT department will know better than most how the full value of data can be unlocked and can help businesses pull away from seeing GDPR as a cost of doing business and start seeing it as an opportunity to do business better.

Autonomous: A New Lens for Analytics

Mon, 2018-05-21 05:45

Welcome to the era of intelligent, self-driving software. Just as self-driving vehicles are set to transform motoring, self-driving software promises to transform our productivity, and strengthen our analytical abilities.

Perhaps you drive an automatic car today—how much are you looking forward to the day your car will automatically drive you? And how much more preferable would smoother, less time-consuming journeys be—always via the best route, with fewer hold-ups, and automatically avoiding unexpected road congestion—where you only have to input your destination? The technology is almost here, and similar advances are driving modern business applications.

AI and machine learning are finally coming of age thanks to the recent advances in big data that created—for the first time—data sets that were large enough for computers to draw inferences and learn from. That, along with years of SaaS application development in cloud computing environments, means that autonomous technology—harnessing both AI and business intelligence—is now fuelling self-driving software… for both cars and cloud applications.

 

Autonomy—beyond automation

Automation has, of course, been around for years. But autonomy—running on AI and machine learning—takes it to new levels. Today’s software is truly self-driving—it eliminates the need for humans to provision, secure, monitor, back up, recover, troubleshoot or tune. It upgrades and patches itself, and automatically applies security updates, all while running normally. Indeed, an autonomous data warehouse, for example, can reduce administration overheads by up to 80%.

 

Intelligent thinking

But the greatest value is perhaps in what AI enables you to discover from your data. When applied to analytics, it can identify patterns in huge data sets that might otherwise go unnoticed. So, for example, you could apply AI to sales data to identify trends—who bought what, where, when and why?—and apply those to improve the accuracy of your future forecasts.

Alternatively, if you were looking for a vibrant location for new business premises, you might use AI to search for an area with a strong social media buzz around its restaurants and bars. You could teach the software to look for specific words or phrases, and harness machine learning to improve results over time.

AI technology is already widely used in HR to take the slog out of sifting through huge numbers of job applications. As well as being faster and requiring less manpower, it’s able to remove human bias—critical in the highly subjective area of recruitment—and identify the best candidates based on factors such as the kind of language they use.

 

Knowledge and power for everyone

These technologies are coming online now—today—for everyone. In the past, most database reporting was typically run by data analysts or scientists to update pre-existing dashboards and reports. Nowadays there are many more business users who are demanding access to such insights, which is being made possible by tools that are far easier to use.

Anyone can experiment with large samples of different data sets, combining multiple data formats—structured and unstructured—and discovering new trends. They can get answers in context, at the right time, and convert them into simple-to-understand insights, enabling decisions to be made more quickly for competitive advantage.

 

Smarter and smarter…

Yet it’s the strength of those insights that’s really compelling. As one commentator observed: ‘Machine intelligence can give you answers to questions that you haven’t even thought of.’ The quality of those answers—and their underlying questions—will only improve over time. That’s why it’s becoming a competitive imperative to embrace the power of intelligent analytics to ensure you can keep pace with market leaders.

 

Discover how…

In my last blog, I shared how organisations can profit from data warehouses and data marts, and how Oracle’s self-driving, self-securing, and self-repairing Autonomous Data Warehouse saves resources on maintenance, allowing investment in data analytics.

 

CPQ is an Auditor’s Best Friend

Mon, 2018-05-21 03:00

By Andy Pieroux, Founder and Managing Director of Walpole Partnership Ltd.  

One of the reasons many companies invest in a Configure, Price and Quote (CPQ) system is to provide a robust audit trail for their pricing decisions. Let’s take a look at why, and how CPQ can help.


First, apologies if you are an auditor. I’ve always been on the business side - either in sales, sales management, or as a pricing manager. I can appreciate your view may be different from the other side of the controls. Perhaps by the end of this article our points of view may become closer?

If your business has the potential to get audited, I know that I can speak on your behalf to say we all just love being audited. We love the time taken away from our day jobs. We love the stress of feeling that something may be unearthed that exposes us or gets us in trouble, even if we’ve never knowingly done anything wrong. We love the thought of our practices being exposed as 'in need of improvement' and relish the chance to dig through old documents and folders to try and piece together the story of why we did what we did… especially when it was several years ago. Yes sir, bring on the audit.

The reason we love it so much is that in our heart of hearts, we know audits are needed for our organization to prosper in the future. We dread the thought that our company might be caught up in a scandal like the mis-selling of pensions, or PPI (payment protection insurance), or serious accounting frauds like Enron.

It was scandals like Enron in the early 2000s that gave rise to stricter audit requirements and Sarbanes-Oxley (SOX). This set a high standard for internal controls, and much tougher penalties for board members who fail to ensure that financial statements are accurate. The role of pricing decisions (e.g. who authorized what and when) and the accuracy of revenue reporting become paramount when evidencing compliance with audit requirements such as these.

At this point, a CPQ system can be the simple answer to your audit needs. All requests for discount, and the way revenue is allocated across products and services, are documented. All approvals can be attributed to an individual, time-stamped, and captured with the reasons given at the time of approval. More importantly, the ability to show an auditor the entire history of a decision and to follow the breadcrumbs from a signed deal all the way to reported revenue at the click of a button means you have nothing to hide, and a clear understanding of the decisions. This is music to an auditor’s ears. It builds trust and confidence in the process and means any anomalies can be quickly analyzed.

When you have all this information securely stored in the cloud, with access controlled so that only those who need it can see it, and a tamper-proof process designed with integrity in mind, passing an audit becomes so much easier. All the anxiety and pain mentioned above disappears. Auditors are no longer the enemy. You will find they can help advise on improvements to the rules in your system to make future audits even more enjoyable. Yes - that’s right…. I said it. Enjoyable Audits!

So, CPQ is an auditor’s friend, and an auditee’s friend too. It doesn’t just apply to the big-scale audit requirements like SOX, but any organization that is auditable. Whether you’re a telecommunications company affected by IFRS 15, an organization impacted by GDPR, or any one of a thousand other guidelines, rules or quality policies that get checked - having data and decisions stored in a CPQ system will make you love audits too.

 

 

A node-oracledb Web Service in Docker

Thu, 2018-05-17 02:28

This post shows how to run a node-oracledb application in a Docker Container. For bonus points, the application connects to an Oracle Database running in a second container.

The steps are the 'show notes' from a recent talk at Oracle Code.

The demo app is a simple Express web service that accepts REST calls.

 

DOCKER

Oracle Docker images are available from https://store.docker.com/ and also mirrored on https://container-registry.oracle.com

If you're not familiar with Docker, it helps to know basic terminology:

  • Images: Collection of software to be run as a container. Images are immutable. Changes to an image require a new image build.

  • Registry: Place to store and download images.

  • Container: A lightweight standalone, executable piece of software that includes everything required to run it on a host. Containers are spun up from images. Containers are non-persistent. Once a container is deleted, all files inside that container are gone.

  • Docker engine: The software engine running containers.

  • Volumes: Place to persist data outside the container.

CONFIGURE A DOCKER HOST

For my host, I used Oracle Linux 7, which has the ol7_latest and ol7_uekr4  channels already enabled.

  • Install the Docker engine as the root user by running 'sudo su -', or prefix each command with 'sudo':

    # yum-config-manager --enable ol7_addons
    # yum install docker-engine
    # systemctl enable docker
    # systemctl start docker
DOWNLOAD INSTANT CLIENT AND DATABASE DOCKER IMAGES
  • Sign in to the container registry https://container-registry.oracle.com/ with your (free) Oracle "single sign-on" (SSO) credentials.

  • Accept the license on the container registry.

  • On your OL7 Docker host, log in to the registry. Remember to run Docker commands as 'root':

    # docker login container-registry.oracle.com

    This prompts for your Oracle SSO credentials.

  • Get the Oracle Database and Oracle Instant Client images:

    # docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
    # docker pull container-registry.oracle.com/database/instantclient:12.2.0.1

    This can take a while. For testing, you may want to pull the smaller, 'slim' version of the database.

  • View the installed images with:

    # docker images
    REPOSITORY                                             TAG       IMAGE ID      CREATED       SIZE
    container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago  3.44GB
    container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago  407MB
CREATE A DATABASE CONTAINER FROM THE DATABASE IMAGE
  • Start the database container:

    # docker run -d --name demodb -P container-registry.oracle.com/database/enterprise:12.2.0.1

    The '-P' option maps the ports used, allowing access to the database from outside the container.

  • Check for its health and wait until it shows 'healthy'

    # docker ps
    CONTAINER ID  IMAGE                 COMMAND                 STATUS                PORTS                         NAMES
    9596bc2345d3  [...]/database/[...]  "/bin/sh -c '/bin/..."  Up 3 hours (healthy)  ...->1521/tcp, ...->5500/tcp  demodb
  • Find the database container's IP address:

    # docker inspect -f "{{ .NetworkSettings.IPAddress }}" demodb

    You will use this IP in database connect strings in your applications.

  • You can stop and start the container as desired:

    # docker stop demodb
    # docker start demodb

    The data is persistent as long as the container exists. Use 'docker ps --all' to show all containers, running or not.

CREATE A NEW SCHEMA
  • Create a SQL file called createschema.sql:

    SET ECHO ON
    ALTER SESSION SET CONTAINER=orclpdb1;
    DROP USER scott CASCADE;
    CREATE USER scott IDENTIFIED BY tiger;
    GRANT CONNECT, RESOURCE TO scott;
    ALTER USER scott QUOTA UNLIMITED ON USERS;
    DROP TABLE scott.bananas;
    CREATE TABLE scott.bananas (shipment VARCHAR2(4000) CHECK (shipment IS JSON));
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Gita", "ripeness": "All Green", "kilograms": 100 }');
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Ravi", "ripeness": "Full Yellow", "kilograms": 90 }');
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Mindy", "ripeness": "More Yellow than Green", "kilograms": 92 }');
    COMMIT;
    EXIT

    For this demo, you can see I used the Oracle Database 12.1.0.2 JSON data type.

  • Execute createschema.sql in your favorite tool, such as SQL*Plus.

    In my case I actually ran SQL*Plus on my Docker host machine. Cheating a bit on giving details here, I had downloaded the Instant Client Basic and SQL*Plus packages and unzipped them as shown in the Instant Client instructions. I then set my shell to use the SQL*Plus binary:

    # export LD_LIBRARY_PATH=/home/cjones/instantclient_12_2
    # export PATH=/home/cjones/instantclient_12_2:$PATH

    Using the database IP address as shown earlier you can now run the script in SQL*Plus against the container database. In my environment the database IP was 172.17.0.2:

    # sqlplus -l sys/Oradoc_db1@172.17.0.2/orclpdb1.localdomain as sysdba @createschema.sql

    The database password and service name shown are the defaults in the image.

CREATE A NODE.JS IMAGE

Let's add Node.js to the Instant Client image.

  • Create a sub-directory nodejs-scripts

    # mkdir nodejs-scripts
  • Create a new file 'nodejs-scripts/Dockerfile'. This is the 'recipe' for building a Docker image. Here Node.js is added to the Instant Client image to create a new image usable by any Node.js application. The Node.js 8 package for Oracle Linux is handy.

    The Dockerfile should contain:

    FROM container-registry.oracle.com/database/instantclient:12.2.0.1

    ADD ol7_developer_nodejs8.repo /etc/yum.repos.d/ol7_developer_nodejs8.repo

    RUN echo proxy=http://my-proxy.example.com:80 >> /etc/yum.conf

    RUN yum -y update && \
        rm -rf /var/cache/yum && \
        yum -y install nodejs

    The FROM line shows that we base our new image on the Instant Client image.

    If you are not behind a proxy, you can omit the proxy line. Or change the line to use your proxy.

    For quick testing, you may want to omit the 'yum -y update' command.

  • The Dockerfile ADD command copies 'ol7_developer_nodejs8.repo' from the host file system into the image's file system. Create 'nodejs-scripts/ol7_developer_nodejs8.repo' containing:

    [ol7_developer_nodejs8]
    name=Oracle Linux $releasever Node.js 8 Packages for Development and test ($basearch)
    baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_nodejs8/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1
  • Now the new image with Oracle Instant Client and Node.js 8 can be built using this Dockerfile:

    # docker build -t cjones/nodejs-image nodejs-scripts

    The 'cjones/nodejs-image' is the image name, not a directory path.

  • You can see the new image has been created:

    # docker images
    REPOSITORY                                             TAG       IMAGE ID      CREATED         SIZE
    cjones/nodejs-image                                    latest    e048b739bb63  29 minutes ago  1.51GB
    container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago    3.44GB
    container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago    407MB
CREATE A NODE.JS DEMO IMAGE

The new Node.js image is refined by installing our demo application. This creates another new image that we can later run whenever we want to use the application.

  • Create a sub-directory 'ws-demo-scripts':

    # mkdir ws-demo-scripts
  • Create a new file 'ws-demo-scripts/Dockerfile' containing:

    FROM cjones/nodejs-image

    ENV https_proxy=http://my-proxy.example.com:80

    WORKDIR workdir

    COPY package.json package.json
    COPY server.js server.js

    RUN npm install

    CMD ["npm", "start"]

    The first line shows the new image should be based on the Node.js image 'cjones/nodejs-image' created in the section above.

    Again, adjust the proxy line as needed by your network.

    You can see the Dockerfile copies two files from our host file system into the image. These files are shown below.

    When the image is created, the RUN command will install the Node.js dependencies from package.json.

    When a container starts, the CMD action is taken, which runs 'npm start', in turn invoking the 'main' target in package.json. Looking below to the package.json content, you can see this means 'node server.js' is run.

  • Create a file 'ws-demo-scripts/package.json' containing:

    { "name": "banana-farmer", "version": "1.0.0", "description": "RESTful API using Node.js Express Oracle DB", "main": "server.js", "author": "Oracle", "license": "Apache", "dependencies": { "body-parser": "^1.18.2", "express": "^4.16.0", "oracledb": "^2.2.0" } }

    As you can see, the application depends on the body-parser module, the node-oracledb module, and also Express. This demo is an Express web service application. And yes, it is a Banana Farmer web service.

    The default run target of package.json is the application file 'server.js'.

  • Create the application file 'ws-demo-scripts/server.js' containing the contents from here.

    The demo application is just this one file.

  • Build the demo image:

    # docker build -t cjones/ws-demo ws-demo-scripts

    We now have our fourth image which contains our runnable application:

    # docker images
    REPOSITORY                                             TAG       IMAGE ID      CREATED         SIZE
    cjones/ws-demo                                         latest    31cbe6d2ea4e  21 seconds ago  1.51GB
    cjones/nodejs-image                                    latest    e048b739bb63  29 minutes ago  1.51GB
    container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago    3.44GB
    container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago    407MB
DEMO APPLICATION OVERVIEW

The Banana Farmer scenario is that shipments of bananas from farmers are recorded. They can have a farmer name, ripeness, and weight. Shipments can be inserted, queried, updated or deleted.

Let's look at a couple of snippets from ws-demo-scripts/server.js.

A connection helper creates a pool of database connections:

oracledb.createPool({
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING
},
. . .

The credentials are taken from environment variables. When we run the app container we will pass values for those environment variables into the container.

The application has Express routes for REST GET, POST, PUT and DELETE calls. The code to handle a GET request looks like:

// HTTP method: GET
// URI        : /bananas/FARMER
// Get the banana shipment for FARMER
app.get('/bananas/:FARMER', function (req, res) {
  doGetConnection(res, function(err, connection) {
    if (err)
      return;
    connection.execute(
      "SELECT b.shipment FROM bananas b WHERE b.shipment.farmer = :f",
      { f: req.params.FARMER },
      function (err, result) {
        if (err) {
          res.set('Content-Type', 'application/json');
          res.status(500).send(JSON.stringify({
            status: 500,
            message: "Error getting the farmer's profile",
            detailed_message: err.message
          }));
        } else if (result.rows.length < 1) {
          res.set('Content-Type', 'application/json');
          res.status(404).send(JSON.stringify({
            status: 404,
            message: "Farmer doesn't exist",
            detailed_message: ""
          }));
        } else {
          res.contentType('application/json');
          res.status(200).send(JSON.stringify(result.rows));
        }
        doRelease(connection, "GET /bananas/" + req.params.FARMER);
      });
  });
});

Express makes it easy. It handles the routing to this code when a GET request with the URL '/bananas/<name>' e.g. '/bananas/Gita' is called. This simply binds the URL route parameter containing the farmer’s name into the SELECT statement. Binding is important for security and scalability, as you know. The SQL syntax used is the JSON 'dot' notation of Oracle Database 12.2 but it could be rewritten to work with 12.1.0.2.

The bulk of the code is error handling, looking after the cases where there was a processing error or no rows returned. It sends back HTTP status codes 500 or 404, respectively.

The success code path sends back the query output 'result.rows' as a JSON string, with the HTTP success status code 200.

START THE DEMO CONTAINER

Let's run the application.

  • Create a file 'ws-demo-scripts/envfile.list' with the credentials for the application. Use the IP address of your database container found with the 'docker inspect' command shown previously. In my environment, the database IP address was '172.17.0.2'

    NODE_ORACLEDB_USER=scott
    NODE_ORACLEDB_PASSWORD=tiger
    NODE_ORACLEDB_CONNECTIONSTRING=172.17.0.2/orclpdb1.localdomain
  • Start the Node.js web service container

    # docker run -d --name nodejs -P --env-file ws-demo-scripts/envfile.list cjones/ws-demo
STATUS CHECK
  • To recap what's happened, the Docker images are:

    # docker images
    REPOSITORY                                             TAG       IMAGE ID      CREATED         SIZE
    cjones/ws-demo                                         latest    25caede29b17  12 minutes ago  1.51GB
    cjones/nodejs-image                                    latest    138f2b76ffe7  13 minutes ago  1.51GB
    container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago    3.44GB
    container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago    407MB

    Two base images were downloaded. An image with Node.js was created from the Instant Client image. Finally, a fourth image 'cjones/ws-demo' with Node.js, Instant Client and the application code was created.

  • We have started database ('demodb') and application containers ('nodejs'):

    # docker ps
    CONTAINER ID  IMAGE                 COMMAND                 STATUS                PORTS                         NAMES
    2924e1225290  cjones/ws-demo        "npm start"             Up 3 hours                                          nodejs
    9596bc2345d3  [...]/database/[...]  "/bin/sh -c '/bin/..."  Up 3 hours (healthy)  ...->1521/tcp, ...->5500/tcp  demodb

    We found the IP address of the database container, and knew (by reading the container registry documentation) the default credentials of the SYS user.

    We created a schema SCOTT on the database, with a table containing some JSON data.

    An application container was started, with the database application credentials and connection string specified in an environment file outside the container.

SUBMIT REST REQUESTS

Now we can call our application, and it will access the database.

  • Install the browser extension HttpRequester (in Firefox) or Postman (in Chrome).

  • Find the IP of the demo web service container:

    # docker inspect -f "{{ .NetworkSettings.IPAddress }}" nodejs

    In my environment, it was '172.17.0.3'. Use this with the port (3000) and various endpoints (e.g. '/bananas/<farmer>') defined in server.js for REST requests.

  • In the HttpRequester or Postman extensions you can make various REST calls.

    Get all shipments:

    GET http://172.17.0.3:3000/bananas

    Get one farmer's shipment(s):

    GET http://172.17.0.3:3000/bananas/Gita

    New data:

    POST http://172.17.0.3:3000/bananas

    { "farmer" : "CJ", "ripeness" : "Light Green", "kilograms" : 50 }

    Update data:

    PUT http://172.17.0.3:3000/bananas/CJ

    { "farmer" : "CJ", "ripeness" : "50% Green, 50% Yellow", "kilograms" : 45 }

    Here's a screenshot of HttpRequester in action doing a GET request to get all banana shipments. On the left, the red boxes show the URL for the '/bananas' endpoint was executed as a GET request. On the right, the response shows the success HTTP status code of 200 and the returned data from the request:

    Screenshot of HttpRequester
  • When you are finished with the containers you can stop them:

    # docker stop demodb
    # docker stop nodejs

    If you haven't tried Docker yet, now is the perfect time! Containers make deployment and development easy. Oracle's Docker images let you get started with Oracle products very quickly.

Efficient and Scalable Batch Statement Execution in Python cx_Oracle

Fri, 2018-04-27 00:06

 

 

Today's guest post is by Oracle's Anthony Tuininga, creator and lead maintainer of cx_Oracle, the extremely popular Oracle Database interface for Python.

 

 

 

Introduction

This article shows how batch statement execution in the Python cx_Oracle interface for Oracle Database can significantly improve performance and make working with large data sets easy.

In many cx_Oracle applications, executing SQL and PL/SQL statements using the method cursor.execute() is perfect. But if you intend to execute the same statement repeatedly for a large set of data, your application can incur significant overhead, particularly if the database is on a remote network. The method cursor.executemany() gives you the ability to reduce network transfer costs and database load, and can significantly outperform repeated calls to cursor.execute().

SQL

To help demonstrate batch execution, the following tables and data will be used:

create table ParentTable (
    ParentId              number(9) not null,
    Description           varchar2(60) not null,
    constraint ParentTable_pk primary key (ParentId)
);

create table ChildTable (
    ChildId               number(9) not null,
    ParentId              number(9) not null,
    Description           varchar2(60) not null,
    constraint ChildTable_pk primary key (ChildId),
    constraint ChildTable_fk foreign key (ParentId)
            references ParentTable
);

insert into ParentTable values (10, 'Parent 10');
insert into ParentTable values (20, 'Parent 20');
insert into ParentTable values (30, 'Parent 30');
insert into ParentTable values (40, 'Parent 40');
insert into ParentTable values (50, 'Parent 00');

insert into ChildTable values (1001, 10, 'Child A of Parent 10');
insert into ChildTable values (1002, 20, 'Child A of Parent 20');
insert into ChildTable values (1003, 20, 'Child B of Parent 20');
insert into ChildTable values (1004, 20, 'Child C of Parent 20');
insert into ChildTable values (1005, 30, 'Child A of Parent 30');
insert into ChildTable values (1006, 30, 'Child B of Parent 30');
insert into ChildTable values (1007, 40, 'Child A of Parent 40');
insert into ChildTable values (1008, 40, 'Child B of Parent 40');
insert into ChildTable values (1009, 40, 'Child C of Parent 40');
insert into ChildTable values (1010, 40, 'Child D of Parent 40');
insert into ChildTable values (1011, 40, 'Child E of Parent 40');
insert into ChildTable values (1012, 50, 'Child A of Parent 50');
insert into ChildTable values (1013, 50, 'Child B of Parent 50');
insert into ChildTable values (1014, 50, 'Child C of Parent 50');
insert into ChildTable values (1015, 50, 'Child D of Parent 50');

commit;

Simple Execution

To insert a number of rows into the parent table, the following naive Python script could be used:

data = [
    (60, "Parent 60"),
    (70, "Parent 70"),
    (80, "Parent 80"),
    (90, "Parent 90"),
    (100, "Parent 100")
]
for row in data:
    cursor.execute("""
            insert into ParentTable (ParentId, Description)
            values (:1, :2)""", row)

This works as expected and five rows are inserted into the table. Each execution, however, requires a "round-trip" to the database. A round-trip is defined as the client (i.e. the Python script) making a request to the database and the database sending back its response to the client. In this case, five round-trips are required. As the number of executions increases, the cost increases in a linear fashion based on the average round-trip cost. This cost is dependent on the configuration of the network between the client (Python script) and the database server, as well as on the capability of the database server.

Batch Execution

Performing the same inserts using executemany() would be done as follows:

data = [
    (60, "Parent 60"),
    (70, "Parent 70"),
    (80, "Parent 80"),
    (90, "Parent 90"),
    (100, "Parent 100")
]
cursor.executemany("""
        insert into ParentTable (ParentId, Description)
        values (:1, :2)""", data)

In this case there is only one round-trip to the database, not five. In fact, no matter how many rows are processed at the same time there will always be just one round-trip. As the number of rows processed increases, the performance advantage of cursor.executemany() skyrockets. For example, on my machine inserting 1,000 rows into the same table in a database on the local network using cursor.execute() takes 410 ms, whereas using cursor.executemany() requires only 20 ms. Increasing the number to 10,000 rows requires 4,000 ms for cursor.execute() but only 60 ms for cursor.executemany()!

For really huge data sets there may be external buffer or network limits to how many rows can be processed at one time. These limits are based on both the number of rows being processed as well as the "size" of each row that is being processed. The sweet spot can be found by tuning your application. Repeated calls to executemany() are still better than repeated calls to execute().
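To illustrate that last point, here is a minimal sketch (not from the original article) of feeding a very large data set to executemany() in fixed-size batches; the batch size of 10,000 and the 'big_data' list are assumptions you would tune and replace for your own rows and network:

BATCH_SIZE = 10000    # assumed tuning value; adjust for your row size and network

sql = "insert into ParentTable (ParentId, Description) values (:1, :2)"

# 'big_data' is a hypothetical list of (ParentId, Description) tuples.
for start in range(0, len(big_data), BATCH_SIZE):
    batch = big_data[start:start + BATCH_SIZE]
    cursor.executemany(sql, batch)    # one round-trip per batch, not per row

connection.commit()

Each iteration still costs only one round-trip, so the total number of round-trips is the number of batches rather than the number of rows.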

As mentioned earlier, execution of PL/SQL statements is also possible. Here is a brief example demonstrating how to do so:

data = [[2], [6], [4]]
var = cursor.var(str, arraysize = len(data))
data[0].append(var)     # OUT bind variable ':2'

cursor.executemany("""
        declare
            t_Num number := :1;
            t_OutValue varchar2(100);
        begin
            for i in 1..t_Num loop
                t_OutValue := t_OutValue || 'X';
            end loop;
            :2 := t_OutValue;
        end;""", data)

print("Result:", var.values)

This results in the following output:

Result: ['XX', 'XXXXXX', 'XXXX']

Using executemany()

Given the significant performance advantages of batch execution of a single statement, it would seem obvious to use cursor.executemany() whenever possible. Let's look at some of the other features of executemany() that are useful for common data handling scenarios.

Scenario 1: Getting Affected Row Counts

One scenario that may arise is the need to determine how many rows are affected by each row of data that is passed to cursor.executemany(). Consider this example:

for parentId in (10, 20, 30):
    cursor.execute("delete from ChildTable where ParentId = :1", [parentId])
    print("Rows deleted for parent id", parentId, "are", cursor.rowcount)

This results in the following output:

Rows deleted for parent id 10 are 1
Rows deleted for parent id 20 are 3
Rows deleted for parent id 30 are 2

Since each delete is performed independently, determining how many rows are affected by each delete is easy to do. But what happens if we use cursor.executemany() in order to improve performance as in the following rewrite?

data = [[10], [20], [30]]
cursor.executemany("delete from ChildTable where ParentId = :1", data)
print("Rows deleted:", cursor.rowcount)

This results in the following output:

Rows deleted: 6

You'll note this is the sum of all of the rows that were deleted in the prior example, but the information on how many rows were deleted for each parent id is missing. Fortunately, that can be determined by enabling the Array DML Row Counts feature, available in Oracle Database 12.1 and higher:

data = [[10], [20], [30]]
cursor.executemany("delete from ChildTable where ParentId = :1", data,
        arraydmlrowcounts = True)
for ix, rowsDeleted in enumerate(cursor.getarraydmlrowcounts()):
    print("Rows deleted for parent id", data[ix][0], "are", rowsDeleted)

This results in the same output as was shown for the simple cursor.execute():

Rows deleted for parent id 10 are 1
Rows deleted for parent id 20 are 3
Rows deleted for parent id 30 are 2

Scenario 2: Handling Bad Data

Another scenario is handling bad data. When processing large amounts of data some of that data may not fit the constraints imposed by the database. Using cursor.execute() such processing may look like this:

data = [
    (1016, 10, 'Child B of Parent 10'),
    (1017, 10, 'Child C of Parent 10'),
    (1018, 20, 'Child D of Parent 20'),
    (1018, 20, 'Child D of Parent 20'),     # duplicate key
    (1019, 30, 'Child C of Parent 30'),
    (1020, 30, 'Child D of Parent 40'),
    (1021, 600, 'Child A of Parent 600'),   # parent does not exist
    (1022, 40, 'Child F of Parent 40'),
]

for ix, row in enumerate(data):
    try:
        cursor.execute("""
                insert into ChildTable (ChildId, ParentId, Description)
                values (:1, :2, :3)""", row)
    except cx_Oracle.DatabaseError as e:
        print("Row", ix, "has error", e)

This results in the following output:

Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated
Row 6 has error ORA-02291: integrity constraint (EMDEMO.CHILDTABLE_FK) violated - parent key not found

If you make use of cursor.executemany(), however, execution stops at the first error that is encountered:

data = [
    (1016, 10, 'Child B of Parent 10'),
    (1017, 10, 'Child C of Parent 10'),
    (1018, 20, 'Child D of Parent 20'),
    (1018, 20, 'Child D of Parent 20'),     # duplicate key
    (1019, 30, 'Child C of Parent 30'),
    (1020, 30, 'Child D of Parent 40'),
    (1021, 600, 'Child A of Parent 600'),   # parent does not exist
    (1022, 40, 'Child F of Parent 40'),
]

try:
    cursor.executemany("""
            insert into ChildTable (ChildId, ParentId, Description)
            values (:1, :2, :3)""", data)
except cx_Oracle.DatabaseError as e:
    errorObj, = e.args
    print("Row", cursor.rowcount, "has error", errorObj.message)

This results in the following output:

Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated

Fortunately there is an option to help here as well, using the Batch Errors feature available in Oracle Database 12.1 and higher. This can be seen using the following code:

data = [
    (1016, 10, 'Child B of Parent 10'),
    (1017, 10, 'Child C of Parent 10'),
    (1018, 20, 'Child D of Parent 20'),
    (1018, 20, 'Child D of Parent 20'),     # duplicate key
    (1019, 30, 'Child C of Parent 30'),
    (1020, 30, 'Child D of Parent 40'),
    (1021, 600, 'Child A of Parent 600'),   # parent does not exist
    (1022, 40, 'Child F of Parent 40'),
]

cursor.executemany("""
        insert into ChildTable (ChildId, ParentId, Description)
        values (:1, :2, :3)""", data,
        batcherrors = True)

for errorObj in cursor.getbatcherrors():
    print("Row", errorObj.offset, "has error", errorObj.message)

This results in the following output, which is identical to the example that used cursor.execute():

Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated
Row 6 has error ORA-02291: integrity constraint (EMDEMO.CHILDTABLE_FK) violated - parent key not found

In both the execute() and executemany() cases, rows that were inserted successfully open a transaction which will need to be either committed or rolled back with connection.commit() or connection.rollback(), depending on the needs of your application. Note that if you use autocommit mode, the transaction is committed only when no errors are returned; otherwise, a transaction is left open and will need to be explicitly committed or rolled back.
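Putting that together, here is one sketch of how an application might decide what to do after a batch insert with batcherrors enabled. The "commit unless everything failed" policy is only an example, and 'data', 'cursor' and 'connection' are the objects used earlier.

# Sketch: one possible policy for handling a batch that contains bad rows.
cursor.executemany("""
        insert into ChildTable (ChildId, ParentId, Description)
        values (:1, :2, :3)""", data,
        batcherrors = True)

errors = cursor.getbatcherrors()
for errorObj in errors:
    print("Skipping row", errorObj.offset, "because of", errorObj.message)

if len(errors) < len(data):
    connection.commit()       # keep the rows that did insert
else:
    connection.rollback()     # nothing useful was inserted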

Scenario 3: DML RETURNING Statements

The third scenario that I will consider is that of DML RETURNING statements. These statements allow you to bundle a DML statement (such as an INSERT, UPDATE, DELETE or MERGE) with a clause that returns data from the affected rows at the same time. With cursor.execute() this is done easily enough using something like the following code:

childIdVar = cursor.var(int)
cursor.setinputsizes(None, childIdVar)
for parentId in (10, 20, 30):
    cursor.execute("""
            delete from ChildTable
            where ParentId = :1
            returning ChildId into :2""", [parentId])
    print("Child ids deleted for parent id", parentId, "are",
            childIdVar.values)

This produces the following output:

Child ids deleted for parent id 10 are [1001]
Child ids deleted for parent id 20 are [1002, 1003, 1004]
Child ids deleted for parent id 30 are [1005, 1006]

Support for DML RETURNING in cursor.executemany() was introduced in cx_Oracle 6.3. Because it was supported only in execute() prior to cx_Oracle 6.3, the cx_Oracle.__future__ object must have the attribute "dml_ret_array_val" set to True to allow multiple values to be returned by executemany(). Failing to set this to True when calling executemany() will result in an error. Finally, the variable created to accept the returned values must have an array size large enough to accept the rows that are returned (one array of output data is returned for each of the input records that are provided).

The following code shows the new executemany() support in cx_Oracle 6.3:

cx_Oracle.__future__.dml_ret_array_val = True

data = [[10], [20], [30]]
childIdVar = cursor.var(int, arraysize = len(data))
cursor.setinputsizes(None, childIdVar)
cursor.executemany("""
        delete from ChildTable
        where ParentId = :1
        returning ChildId into :2""", data)
for ix, inputRow in enumerate(data):
    print("Child ids deleted for parent id", inputRow[0], "are",
            childIdVar.getvalue(ix))

This results in the same output as was seen with cursor.execute():

Child ids deleted for parent id 10 are [1001]
Child ids deleted for parent id 20 are [1002, 1003, 1004]
Child ids deleted for parent id 30 are [1005, 1006]

Note that setting "dml_ret_array_val" to True also causes execute() to return an array of values for each bind record. In a future cx_Oracle 7 release this will become the only behavior available.

Scenario 4: Variable Data Lengths

When multiple rows of data are being processed there is the possibility that the data is not uniform in type and size. cx_Oracle makes some effort to accommodate such differences. For example, type determination is deferred until a value that is not None is found in the data. If all values in a particular column are None, then cx_Oracle assumes the type is a string and has a length of 1. cx_Oracle will also adjust the size of the buffers used to store strings and bytes when a longer value is encountered in the data. These sorts of operations, however, will incur overhead as cx_Oracle has to reallocate memory and copy all of the data that has been processed thus far. To eliminate this overhead, the method cursor.setinputsizes() should be used to tell cx_Oracle about the type and size of the data that is going to be used. For example:

data = [ (110, "Parent 110"), (2000, "Parent 2000"), (30000, "Parent 30000"), (400000, "Parent 400000"), (5000000, "Parent 5000000") ] cursor.setinputsizes(None, 20) cursor.executemany(""" insert into ParentTable (ParentId, Description) values (:1, :2)""", data)

In this example, without the call to cursor.setinputsizes(), cx_Oracle would perform five allocations of increasing size as it discovered each new, larger string. The value 20, however, tells cx_Oracle that the maximum size of the strings that will be processed is 20 characters. Since cx_Oracle allocates memory for each row based on this value it is best not to oversize it. Note that if the type and size are uniform (like they are for the first column in the data being inserted), the type does not need to be specified and None can be provided, indicating that the default type (in this case cx_Oracle.NUMBER) should be used.
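If the rows are already in memory you do not need to guess the maximum at all. A small variation on the example above (a sketch, using the same 'data' list) derives the size directly from the data, which avoids both reallocation and over-allocation:

# Sketch: derive the buffer size from the data instead of hard-coding 20.
# 'data' is the list of (ParentId, Description) tuples shown above; run this
# instead of, not as well as, the previous insert to avoid duplicate keys.
max_desc_len = max(len(description) for _, description in data)
cursor.setinputsizes(None, max_desc_len)
cursor.executemany("""
        insert into ParentTable (ParentId, Description)
        values (:1, :2)""", data)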

Conclusion

As can be seen by the preceding examples, cursor.executemany() lets you manage data easily and enjoy high performance at the same time!

Python cx_Oracle 6.3 Supports DML RETURNING for Batch Statement Execution

Thu, 2018-04-26 23:26

cx_Oracle logo

cx_Oracle 6.3, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

Top Feature: Cursor.executemany() support for OUT bind variables in DML RETURNING statements.

 

This release contains a number of bug fixes and useful improvements. For the full list, see the Release Notes, but I wanted to highlight a few features:

  • Support for binding integers and floats as cx_Oracle.NATIVE_FLOAT.

  • Support for true heterogeneous session pools that use different username/password combinations for each session acquired from the pool. A minimal sketch is shown after this list.

  • All exceptions raised by cx_Oracle now produce a cx_Oracle._Error object.

  • Support for getting the OUT values of bind variables bound to a DML RETURNING statement when calling Cursor.executemany(). For technical reasons, this requires setting a new attribute in cx_Oracle.__future__. As an example:

    cx_Oracle.__future__.dml_ret_array_val = True

    data = [[10], [20], [30]]
    childIdVar = cursor.var(int, arraysize = len(data))
    cursor.setinputsizes(None, childIdVar)
    cursor.executemany("""
            delete from ChildTable
            where ParentId = :1
            returning ChildId into :2""", data)
    for ix, inputRow in enumerate(data):
        print("Child ids deleted for parent id", inputRow[0], "are",
                childIdVar.getvalue(ix))

    Want to know what this displays? Stay tuned to this blog site for an upcoming post on using executemany() in cx_Oracle!
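As a quick illustration of the heterogeneous pool support mentioned in the list above, here is a minimal sketch; the DSN and both sets of credentials are placeholders, not real accounts:

# Sketch of a heterogeneous session pool; DSN and credentials are placeholders.
import cx_Oracle

pool = cx_Oracle.SessionPool(dsn = "dbhost.example.com/orclpdb",
        min = 1, max = 4, increment = 1, homogeneous = False)

# Each acquire() call can supply its own credentials.
conn1 = pool.acquire("hr", "hr_password")
conn2 = pool.acquire("scott", "tiger_password")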

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

ODPI-C 2.3.1 is now on GitHub

Wed, 2018-04-25 17:36
ODPI-C logo

Release 2.3.1 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

 

 

Today a minor patch update release of ODPI-C was pushed to GitHub. Check the Release Notes for details on the handful of fixes that landed.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

ODPI-C 2.3 is now on GitHub

Mon, 2018-04-02 21:33
ODPI-C logo

Release 2.3 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top features: Improved Batch Statement Execution

 

ODPI-C 2.3 improves support for Batch Statement execution with dpiStmt_executeMany(). To support DML RETURNING producing multiple rows for each iteration, a new function dpiVar_getReturnedData() was added, replacing the function dpiVar_getData() which will be deprecated in a future release. A fix for binding LONG data in dpiStmt_executeMany() also landed.

If you haven't heard of Batch Statement Execution (sometimes referred to as Array DML), check out this Python cx_Oracle example or this Node.js node-oracledb example.

A number of other issues were addressed in ODPI-C 2.3. See the release notes for more information.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Node-oracledb 2.2 with Batch Statement Execution (and more) is out on npm

Mon, 2018-04-02 17:16

Release announcement: Node-oracledb 2.2, the Node.js module for accessing Oracle Database, is on npm.

Top features: Batch Statement Execution

In the six-or-so weeks since 2.1 was released, a bunch of new functionality landed in node-oracledb 2.2. This shows how much engineering went into the refactored lower abstraction layer we introduced in 2.0, just to make it easy to expose Oracle features to languages like Node.js.

The top features in node-oracledb 2.2 are:

  • Added oracledb.edition to support Edition-Based Redefinition (EBR). The EBR feature of Oracle Database allows multiple versions of views, synonyms, PL/SQL objects and SQL Translation profiles to be used concurrently. This lets database logic be updated and tested while production users are still accessing the original version.

    The new edition property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an ALTER SESSION command or ORA_EDITION environment variable.

  • Added oracledb.events to allow the Oracle client library to receive Oracle Database service events, such as for Fast Application Notification (FAN) and Runtime Load Balancing (RLB).

    The new events property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an oraaccess.xml file to enable event handling, making it easier to use Oracle high availability features, and it makes event handling available for the first time to users who are linking node-oracledb with version 11.2 Oracle client libraries.

  • Added connection.changePassword() for changing passwords. Passwords can also be changed when calling oracledb.getConnection(), which is the only way to connect when a password has expired.

  • Added connection.executeMany() for efficient batch execution of DML (e.g. INSERT, UPDATE and DELETE) and PL/SQL execution with multiple records. See the example below.

  • Added connection.getStatementInfo() to find information about a SQL statement without executing it. This is most useful for finding the column types of queries and for finding bind variable names. It does require a 'round-trip' to the database, so don't use it without reason. Also there are one or two quirks because the library underneath that provides the implementation has some 'historic' behavior. Check the manual for details.

  • Added connection.ping() to support system health checks. This verifies that a connection is usable and that the database service or network have not gone down. This requires a round-trip to the database so you wouldn't use it without reason. Although it doesn't replace error handling in execute(), sometimes you don't want to be running a SQL statement just to check the connection status, so it is useful in the arsenal of features for keeping systems running reliably.

See the CHANGELOG for all changes.

One infrastructure change we recently made was to move the canonical home for documentation to GitHub 'pages'. This will be kept in sync with the current production version of node-oracledb. If you update your bookmarks to the new locations, it will allow us to update the source code repository documentation mid-release without confusing anyone about available functionality.

Batch Statement Execution

The new connection.executeMany() method allows many sets of data values to be bound to one DML or PL/SQL statement for execution. It is like calling connection.execute() multiple times for one statement but requires fewer round-trips overall. This is an efficient way to handle batch changes, for example when inserting or updating multiple rows, because the reduced cost of round-trips has a significant effect on performance and scalability. Depending on the number of records, their sizes, and on the network speed to the database, the performance of executeMany() can be significantly faster than the equivalent use of execute().

In one little test I did between Node.js on my laptop and a database running on my adjacent desktop, I saw that executeMany() took 16 milliseconds whereas execute() took 2.3 seconds to insert 1000 rows, each consisting of a number and a very short string. With larger data sizes and slower (or faster!) networks the performance characteristics will vary, but the overall benefit is widespread.

The executeMany() method supports IN, IN OUT and OUT variables. Binds from RETURNING INTO clauses are supported, making it easy to insert a number of rows and find, for example, the ROWIDs of each.

With an optional batchErrors mode, you can insert 'noisy' data easily. Batch Errors allows valid rows to be inserted and invalid rows to be rejected. A transaction will be started but not committed, even if autocommit mode is enabled. The application can examine the errors, find the bad data, take action, and explicitly commit or rollback as desired.

To give one example, let's look at the use of batchErrors when inserting data:

var sql = "INSERT INTO childtab VALUES (:1, :2, :3)"; // There are three value in each nested array since there are // three bind variables in the SQL statement. // Each nested array will be inserted as a new row. var binds = [ [1016, 10, "apples"], [1017, 10, "bananas"], [1018, 20, "cherries"], [1018, 20, "damson plums"], // duplicate key [1019, 30, "elderberry"], [1020, 40, "fig"], [1021, 75, "golden kiwifruit"], // parent does not exist [1022, 40, "honeydew melon"] ]; var options = { autoCommit: true, // autocommit if there are no batch errors batchErrors: true, // identify invalid records; start a transaction for valid ones bindDefs: [ // describes the data in 'binds' { type: oracledb.NUMBER }, { type: oracledb.NUMBER }, { type: oracledb.STRING, maxSize: 16 } // size of the largest string, or as close as possible ] }; connection.executeMany(sql, binds, options, function (err, result) { if (err) consol.error(err); else { console.log("Result is:", result); } });

Assuming appropriate data exists in the parent table, the output might be like:

Result is:
{ rowsAffected: 6,
  batchErrors:
   [ { Error: ORA-00001: unique constraint (CJ.CHILDTAB_PK) violated
       errorNum: 1,
       offset: 3 },
     { Error: ORA-02291: integrity constraint (CJ.CHILDTAB_FK) violated - parent key not found
       errorNum: 2291,
       offset: 6 } ] }

This shows that 6 records were inserted but that the records at offsets 3 and 6 (using a 0-based index into the 'binds' array) were problematic. Because of these batch errors, the other records were not committed, despite autoCommit being true. However, they were inserted, and the open transaction can be explicitly committed or rolled back.

We know some users are inserting very large data sets, so executeMany() will be very welcome. For extremely large data sets you may want to call executeMany() with batches of data to avoid size limitations in various layers of the Oracle and operating system stack. Your own testing will determine the best approach.

See Batch Execution in the manual for more information about the modes of executeMany() and how to use it in various cases. There are runnable examples in the GitHub examples directory. Look for the files prefixed 'em_'. There are two variants of each sample: one uses call-back style, and the other uses the Async/Await interface available with Node.js 8.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Python cx_Oracle questions? Ask us live online on March 13 at 20:00-20:30 UTC

Mon, 2018-03-12 18:12

Join me, @AnthonyTuininga, and @OraBlaineOS for the first Python and cx_Oracle monthly 'Office Hours' session tomorrow, March 13 at 20:00-20:30 UTC. The theme for this month is 'Connections' but we're open to any other cx_Oracle questions. Join the video feed or go audio-only. All the details are on the Python and Oracle Database Office Hours page.  

Python cx_Oracle 6.2 is out on PyPI

Mon, 2018-03-05 19:01

cx_Oracle logo

cx_Oracle 6.2, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.



This release:

  • Adds support for creating temporary CLOBs, BLOBs or NCLOBs via a new method Connection.createlob(). A short sketch is shown after this list.

  • Adds support for binding a LOB value directly to a cursor.

  • Adds support for closing the connection when reaching the end of a 'with' code block controlled by the connection as a context manager. See cx_Oracle.__future__ for more information.

  • Was internally updated to the newest ODPI-C data access layer, which brings numerous stability fixes and code improvements including:

    • Open statements and LOBs are tracked and automatically closed when the related connection is closed; this eliminates the need for users of cx_Oracle to track them, and removes the error "DPI-1054: connection cannot be closed when open statements or LOBs exist".

    • Errors during implicit rollback at connection close are ignored; however, if an error does occur, the connection is dropped from the connection pool. This reduces application errors in cases such as when a DBA has killed a session.

    • Avoids an unnecessary round trip to the database when a connection is released back to the pool by preventing a rollback from being called when no transaction is in progress.

  • There was also an internal code restructure to simplify maintenance and consolidate transformations to/from Python objects.
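As a short illustration of the first two items in the list above, here is a hedged sketch of creating a temporary CLOB and binding it directly into an insert; the connect string and the DocTable table are made up for the example:

# Sketch: create a temporary CLOB and bind it straight into a statement.
# The connect string and DocTable(DocId number, Content clob) are hypothetical.
import cx_Oracle

connection = cx_Oracle.connect("hr", "hr_password", "dbhost.example.com/orclpdb")
cursor = connection.cursor()

clob = connection.createlob(cx_Oracle.CLOB)
clob.write("a large block of document text " * 1000)

cursor.execute("insert into DocTable (DocId, Content) values (:1, :2)",
        [1, clob])
connection.commit()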

See the Release Notes for all the fixes.

To upgrade to cx_Oracle 6.2 most users will be able to run:

python -m pip install cx_Oracle --upgrade

Spread the word!

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

ODPI-C 2.2 Release: Powering Oracle Database Access

Mon, 2018-03-05 16:24

ODPI-C 2.2.1 has been tagged for release.

Oracle Database Programming Interface for C (ODPI-C) is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications. The ODPI-C project is open source and maintained by Oracle Corp.

ODPI-C is used as a data access layer in drivers for Node.js, Python, Ruby, Go, Rust, Haskell and more.

Changes in ODPI-C 2.2 from 2.1 include:

  • Open statements and LOBs are tracked and automatically closed when the related connection is closed; this eliminates the need for users of the driver to do so and removes the error "DPI-1054: connection cannot be closed when open statements or LOBs exist".

  • Errors during implicit rollback at connection close are ignored; however, if an error does occur, the connection is dropped from the connection pool. This reduces application errors in cases such as when a DBA has killed a session.

  • Avoid a round trip to the database when a connection is released back to the pool by preventing a rollback from being called when there is no transaction in progress.

  • A new, optional, way of including the source code in your projects: embed/dpi.c was added. This simply includes all other source files. You can reliably link with just dpi.c and not have to update your projects if, and when, new ODPI-C versions have new source files.

  • Many stability fixes, code improvements, new tests, and documentation updates.

See the release notes for all changes.

In my opinion, the stability fixes justify upgrading immediately.

The eagle-eyed will note that today is a 2.2.1 release but we actually tagged 2.2.0 a few weeks ago. ODPI-C 2.2.0 was tagged solely to give an identifiable base for node-oracledb 2.2 to use. However Anthony had some ODPI-C fixes queued up in areas of code not used by node-oracledb, hence today's "official" ODPI-C 2.2.1 announcement.

ODPI-C References

Home page: oracle.github.io/odpi

Code: github.com/oracle/odpi

Documentation: oracle.github.io/odpi/doc/index.html

Release Notes: oracle.github.io/odpi/doc/releasenotes.html

Report issues and discuss: github.com/oracle/odpi/issues

Installation Instructions: oracle.github.io/odpi/doc/installation.html.

Installing the Oracle ODBC Driver on macOS

Thu, 2018-02-22 23:25

A bonus for today is a guest post by my colleague Senthil Dhamotharan. He shares the steps to install the Oracle Instant Client ODBC driver and the unixODBC Driver Manager on macOS.

ODBC is an open specification for accessing databases. The Oracle ODBC driver for Oracle Database enables ODBC applications to connect to Oracle Database. In addition to standard ODBC functions, users can leverage Oracle specific features for high performance data access.

Install the unixODBC Driver Manager
  • Download unixODBC from ftp.unixodbc.org/pub/unixODBC. I used unixODBC-2.3.1.tar.gz.

  • Extract the package:

    tar -zxvf unixODBC-2.3.1.tar.gz
  • Configure unixODBC:

    cd unixODBC-2.3.1
    ./configure

    Note that if you use the configure option "--prefix" to install into a location other than the default directory (/usr/local), then macOS's SIP features may prevent the unixODBC libraries from being located correctly by the ODBC driver.

  • Build and install unixODBC:

    make
    sudo make install
Install the Oracle ODBC Driver
  • Download the Oracle 12.2 Instant Client Basic and ODBC packages from Instant Client Downloads for macOS (Intel x86).

    To reduce the installation size, the Basic Light package can be used instead of Basic, if its character sets and languages are sufficient.

  • Extract both ZIP files:

    unzip instantclient-basic-macos.x64-12.2.0.1.0-2.zip
    unzip instantclient-odbc-macos.x64-12.2.0.1.0-2.zip

    This will create a subdirectory instantclient_12_2

  • The Oracle Instant Client libraries need to be in the macOS library search path, generally either in /usr/local/lib or in your home directory under ~/lib. I did:

    mkdir ~/lib
    cd instantclient_12_2
    ln -s $(pwd)/libclntsh.dylib.12.1 $(pwd)/libclntshcore.dylib.12.1 ~/lib
  • With version 12.2, a small patch to the driver name in instantclient_12_2/odbc_update_ini.sh is required on macOS. I changed line 101 from:

    SO_NAME=libsqora.so.12.1

    to

    SO_NAME=libsqora.dylib.12.1
  • Run the configuration script

    cd instantclient_12_2
    sudo odbc_update_ini.sh /usr/local
    sudo chown $USER ~/.odbc.ini

    This creates a default DSN of "OracleODBC-12c"

  • Edit the new ~/.odbc.ini configuration file and add the Oracle Database connection string. My database is running on the same machine as ODBC (inside a VirtualBox VM) and has a service name of 'orclpdb', so my connection string is 'localhost/orclpdb'. I changed:

    ServerName =

    to

    ServerName = localhost/orclpdb
Verify the installation

Run the isql utility to verify installation. Pass in the DSN name, and an existing database username and password:

$ isql OracleODBC-12c scott tiger

+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>

You can execute SQL statements and quit when you are done.
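If you prefer to verify from a script rather than isql, the same DSN can be exercised from Python with the third-party pyodbc module. This is only a sketch and assumes pyodbc has been installed separately (for example with pip) and that the demo scott/tiger account used above exists:

# Sketch: check the ODBC DSN from Python via the third-party pyodbc module.
# Substitute your own username and password.
import pyodbc

connection = pyodbc.connect("DSN=OracleODBC-12c;UID=scott;PWD=tiger")
cursor = connection.cursor()
cursor.execute("select sysdate from dual")
print(cursor.fetchone())
connection.close()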

Test Program

To test a program that makes ODBC calls, download odbcdemo.c.

  • Edit odbcdemo.c and set the USERNAME and PASSWORD constants to the database credentials.

  • Build it:

    gcc -o odbcdemo -g -lodbc odbcdemo.c
  • Run it

    ./odbcdemo

The output will be like:

Connecting to the DB .. Done
Executing SQL ==> SELECT SYSDATE FROM DUAL
Result ==> 2018-02-21 02:53:47

Summary

ODBC is a popular API for accessing databases. The Oracle ODBC Driver is the best way to access Oracle Database.

Resources

Using the Oracle ODBC Driver.

Oracle ODBC Drivers

Discussion Forum

Oracle Instant Client ODBC Release Notes

Instant Client Downloads

Installing XAMPP for PHP and Oracle Database

Thu, 2018-02-22 22:19

Today's guest post comes from Tianfang Yang who's been working with the Oracle Database extensions for PHP.

This post shows how to install XAMPP on Windows to run PHP applications that connect to a remote Oracle Database.

XAMPP is an open source package that contains Apache, PHP and many PHP 'extensions'. One of these extensions is PHP OCI8, which connects to Oracle Database.

To install XAMPP:

  1. Download "XAMPP for Windows" and follow the installer wizard. I installed into my D: drive.

  2. Start the Apache server via the XAMPP control panel.


    screenshot of XAMPP control panel
  3. Visit http://localhost/dashboard/phpinfo.php via your browser to see the architecture and thread safety mode of the installed PHP. Please note this is the architecture of the installed PHP and not the architecture of your machine. It’s possible to run an x86 PHP on an x64 machine.


    screenshot of PHP configuration showing the PHP OS architecture as x86
  4. [Optional] Oracle OCI8 is pre-installed in XAMPP but if you need a newer version you can download an updated OCI8 PECL package from pecl.php.net. Pick an OCI8 release and select the DLL according to the architecture and thread safety mode. For example, if PHP is x86 and thread safety enabled, download "7.2 Thread Safe (TS) x86". Then replace "D:\xampp\php\ext\php_oci8_12c.dll" with the new "php_oci8_12c.dll" from the OCI8 PECL package.


    screenshot of PECL OCI8 download page

  5. Edit "D:\xampp\php\php.ini" and uncomment the line "extension=oci8_12c". Make sure "extension_dir" is set to the directory containing the PHP extension DLLs. For example,

    extension=oci8_12c
    extension_dir="D:\xampp\php\ext"
  6. Download the Oracle Instant Client Basic package from OTN.

    Select the correct architecture to align with PHP's. For Windows x86 download "instantclient-basic-nt-12.2.0.1.0.zip" from the Windows 32-bit page.


    screenshot of Oracle Instant Client download page
  7. Extract the file in a directory such as "D:\Oracle". A subdirectory "D:\Oracle\instantclient_12_2" will be created.

    Add this subdirectory to the PATH environment variable. You can update PATH in Control Panel -> System -> Advanced System Settings -> Advanced -> Environment Variables -> System Variables -> PATH. In my example I set it to "D:\Oracle\instantclient_12_2".

  8. Restart the Apache server and check the phpinfo.php page again. It shows the OCI8 extension is loaded successfully.


    screenshot of PHP configuration page showing a section for OCI8

    If you also run PHP from a terminal window, make sure to close and reopen the terminal to get the updated PATH value.

  9. To run your first OCI8 application, create a new file in the XAMPP document root "D:\xampp\htdocs\test.php". It should contain:

    <?php

    error_reporting(E_ALL);
    ini_set('display_errors', 'On');

    $username = "hr";                 // Use your username
    $password = "welcome";            // and your password
    $database = "localhost/orclpdb";  // and the connect string to connect to your database

    $query = "select * from dual";

    $c = oci_connect($username, $password, $database);
    if (!$c) {
        $m = oci_error();
        trigger_error('Could not connect to database: '. $m['message'], E_USER_ERROR);
    }

    $s = oci_parse($c, $query);
    if (!$s) {
        $m = oci_error($c);
        trigger_error('Could not parse statement: '. $m['message'], E_USER_ERROR);
    }
    $r = oci_execute($s);
    if (!$r) {
        $m = oci_error($s);
        trigger_error('Could not execute statement: '. $m['message'], E_USER_ERROR);
    }

    echo "<table border='1'>\n";
    $ncols = oci_num_fields($s);
    echo "<tr>\n";
    for ($i = 1; $i <= $ncols; ++$i) {
        $colname = oci_field_name($s, $i);
        echo "  <th><b>".htmlspecialchars($colname,ENT_QUOTES|ENT_SUBSTITUTE)."</b></th>\n";
    }
    echo "</tr>\n";
    while (($row = oci_fetch_array($s, OCI_ASSOC+OCI_RETURN_NULLS)) != false) {
        echo "<tr>\n";
        foreach ($row as $item) {
            echo "<td>";
            echo $item!==null?htmlspecialchars($item, ENT_QUOTES|ENT_SUBSTITUTE):"&nbsp;";
            echo "</td>\n";
        }
        echo "</tr>\n";
    }
    echo "</table>\n";

    ?>

    You need to edit this file and set your database username, password and connect string. If you are using Oracle Database XE, then the connect string should be "localhost/XE".

    The SQL query can also be changed. Currently it queries the special DUAL table, which every user has.

  10. Load the test program in a browser using http://localhost/test.php. The output will be the single value "X" in the column called "DUMMY".


You can read more about PHP OCI8 in the PHP manual, and in the free Underground PHP and Oracle Manual from Oracle.

Enjoy your coding with OCI8!
