I’m very pleased to announce that the Call for Papers for the Rittman Mead BI Forum 2015 is now open, with abstract submissions accepted until January 18th 2015. As in previous years, the BI Forum will run over consecutive weeks in Brighton, UK and Atlanta, GA, with the provisional dates and venues as below:
- Brighton, UK : Hotel Seattle, Brighton, UK : May 6th – 8th 2015
- Atlanta, GA : Renaissance Atlanta Midtown Hotel, Atlanta, USA : May 13th-15th 2015
Now in its seventh year, the Rittman Mead BI Forum is the only conference dedicated entirely to Oracle Business Intelligence, Oracle Business Analytics and the technologies and processes that support them – data warehousing, data analysis, data visualisation, big data and OLAP analysis. We’re looking for sessions around tips & techniques, project case-studies and success stories, and sessions where you’ve taken Oracle’s BI products and used them in new and innovative ways. Each year we select around eight-to-ten speakers for each event along with keynote speakers and a masterclass session, with speaker choices driven by attendee votes at the end of January, and editorial input from myself, Jon Mead, Charles Elliott and Jordan Meyer.
Last year we had a big focus on cloud, and a masterclass and several sessions on bringing Hadoop and big data to the world of OBIEE. This year we’re interested in project stories and experiences around cloud and Hadoop, and we’re keen to hear about any Oracle BI Apps 11g implementations or migrations from the earlier 7.9.x releases. Getting back to basics we’re always interested in sessions around OBIEE, Essbase and data warehouse data modelling, and we’d particularly like to encourage session abstracts on data visualization, BI project methodologies and the incorporation of unstructured, semi-structured and external (public) data sources into your BI dashboards. For an idea of the types of presentations that have been selected in the past, check out the BI Forum 2014, 2013 and 2012 homepages, or feel free to get in touch via email at firstname.lastname@example.org.
The Call for Papers entry form is here, and we’re looking for speakers for Brighton, Atlanta, or both venues if you can speak at both. All sessions this year will be 45 minutes long, and we’ll be publishing submissions and inviting potential attendees to vote on their favourite sessions towards the end of January. Other than that – have a think about abstract ideas now, and make sure you get them in by January 18th 2015.
When upgrading the Oracle E-Business Suite database to Oracle Database 12c (12.1), there are a number of security considerations and steps that should be included in the upgrade procedure. Oracle Support Note ID 1524398.1 Interoperability Notes EBS 12.0 or 12.1 with RDBMS 12cR1 details the upgrade steps. Here, we will document steps that should be included or modified to improve database security. All references to steps are the steps in Note ID 1524398.1.

Step 8
"While not mandatory for the interoperability of Oracle E-Business Suite with the Oracle Database, customers may choose to apply Database Patch Set Updates (PSU) on their Oracle E-Business Suite Database ...".
After any database upgrade, the latest CPU patch (either PSU or SPU) should always be applied. The database upgrade only has the latest CPU patch available at the time of release of the database upgrade patch. In the case of 12.1.0.1, the database upgrade will be current as of July 2013 and be missing the latest five CPU patches. Database upgrade patches reset the CPU level - so even if you had applied the latest CPU patch prior to the upgrade, the upgrade will revert the CPU patch level to July 2013.
From a security perspective, the latest PSU patch should be considered mandatory.

Step 11
It is important to note from a security perspective that Database Vault must be disabled during the upgrade process. Any protections enabled in Database Vault intended for DBAs will be disabled during the upgrade.

Step 15
The DMSYS schema is no longer used with Oracle E-Business Suite and can be safely dropped. We recommend you drop the schema as part of this step to reduce the attack surface of the database and remove unused components. Use the following SQL to remove the DMSYS user --
DROP USER DMSYS CASCADE;

Step 16
As part of the upgrade, it is a good time to review that security-related initialization parameters are set correctly. Verify the following parameters are set -
o7_dictionary_accessibility = FALSE
audit_trail = <set to a value other than none>
sec_case_sensitive_logon = TRUE (patch 12964564 may have to be applied)

Step 20
For Oracle E-Business Suite 12.1, the sqlnet_ifile.ora should contain the following parameter to correspond with the initialization parameter sec_case_sensitive_logon = TRUE -
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10
Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.
We have since learned that people become queasy when their visual systems and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.
Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.
Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.
I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.
My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.
Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research. We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise. VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions. We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.
That said, both Jake and I remain skeptical. There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings. Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger. This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.
Not quite yet. There will be many problems to overcome first, not all of them technical. In fact VR headsets may be the easiest part.
A few of the other technical problems:
- Bandwidth. I still can’t even demo simple animations in a web conference because the U.S. internet system is too slow. I could do it in Korea or Sweden or China or Singapore, but not here anytime soon. Immersive VR will require even more bandwidth.
- Cameras. If you want to see every subtle facial expression in the room, you’ll need cameras pointing at every face from every angle (or at least one 360 camera spinning in the center of the table). For those not in the room you’ll need more than just a web cam pointing at someone’s forehead, especially if you want to recreate them as a 3D avatar. (You’ll need better microphones too, which might turn out to be even harder.) This is technically possible now, Hollywood can do it, but it will be a while before it’s cheap, effortless, and ubiquitous.
- Choreography. Movie directors make it look easy, and even as individuals we’re pretty good about scanning a crowded room and following a conversation. But in a 3-dimensional meeting room full of 3-dimensional people there are many angles to choose from every second. We will expect our virtual eyes to capture at least as much detail as our real eyes that instinctively turn to catch words and expressions before they happen. Even if we accept that any given participant will see a limited subset of what the overall system can see, creating a satisfying immersive presence will require at least some artificial intelligence. There are probably a lot of subtle challenges like this.
And a non-technical problem:
- Privacy. Any virtual meeting which can be transmitted can also be recorded and viewed by others not in the meeting. This includes off-color remarks (now preserved for the ages or at least for future office parties), unflattering camera angles, surreptitious nose picking, etc. We’ve learned from our own research that people *love* the idea of watching other people but are often uncomfortable about being watched themselves. Many people are just plain camera shy – and even less fond of microphones. Some coworkers are uncomfortable with our team’s weekly video conferences. “Glasshole” is now a word in the dictionary – and glassholes sometimes get beaten up.
So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved and some of our attitudes will have to change. We’ll have to find the right balance as a society – and the lawyers will have to sign off on it. This may take a while.
But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters). We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.
What are your thoughts about the viability of virtual reality in the enterprise? Your comments are always appreciated!
An Interview with Michelle Lapierre (pictured left) from Marriott Rewards conducted by Angela Wells, Oracle Social Product Manager
Have you checked out the best and brightest in marketing? The recent Global Markie Awards honored excellence in marketing across a whole range of marketing categories. We are so happy that Marriott Rewards, an Oracle Social Cloud customer, won the Markie award for Best Social Campaign. The category was based on: 1) Effective use of social marketing as a strategy to build brand awareness or turn customers and prospects into advocates; 2) Social media used in new and interesting ways or as the centerpiece of a successful new program; and 3) Seized social media opportunities and generated proven results.
So I (Angela Wells, pictured right) connected with Michelle Lapierre, Senior Director, Customer Experience and Social Media at Marriott Rewards, to hear about this award-winning campaign and what Marriott Rewards is doing next on social.
Oracle Social: Congratulations on recently winning a Markie Award for Best Social Campaign! Before we dive into the campaign specifics, can you describe your organization’s overall social media strategy? How has that strategy evolved?
Michelle: Marriott Rewards joined Facebook in December 2011 and quickly grew to be the largest and most engaged hotel loyalty brand on Facebook (www.facebook.com/marriottrewards). The continuing mission of Marriott Rewards is to engage target audiences around the world through social media channels in a consistent, authentic and meaningful way. The Marriott Rewards social media philosophy is to keep life at the center of the story, not hotels or programs or deals. We believe that our Facebook fans are not only our fans, but they are a diverse community of travelers, dreamers and storytellers. We see our social media channels as an outlet for their stories, not just our own.
Through an emphasis on compassionate and authentic community management and content development, we seek to engage, inspire and keep our Facebook friends coming back to the page on a regular basis. As such, we engage in extended conversations with our Facebook friends and listen to what they have to say, not just when they’re angry with us, but for all the reasons friends speak with each other.
Oracle Social: Can you tell us more about your award-winning campaign?
Michelle: The “30 Beds In 30 Days” sweepstakes was the second Facebook promotion in which Marriott Rewards gave away 30 Marriott Beds to 30 Facebook fans in 30 days. It’s great how much people love these beds! The sweepstakes was hosted on a Facebook-enabled microsite, which was responsively designed for desktop, mobile and tablet users. It also lived on the Marriott Rewards Facebook page as a tab. As with the 2012 sweepstakes, it was co-sponsored by partners ShopMarriott.com and the Marriott Rewards Credit Card by Chase.
Oracle Social: What were your goals for this campaign?
Michelle: Our primary goal was to increase fan acquisition and engagement on Facebook and also to drive traffic to partner sites like Chase Marriott Rewards Credit Card and ShopMarriott.com. Secondarily, we hoped to drive enrollments into the Marriott Rewards program.
Oracle Social: Clearly, this campaign was successful – not just for the award you won, but for the connections you strengthened with your Fans and Rewards numbers. Can you tell us about the results of this campaign?
Michelle: The 2013 “30 Beds in 30 Days” sweepstakes surpassed the results of the 2012 campaign in virtually every way, including Facebook fan acquisition, Rewards program enrollments, and traffic to our partners’ websites (shopmarriott.com and the Marriott Rewards Credit Card by Chase).
The “30 Beds in 30 Days” sweepstakes increased our Share of Voice compared to our competitors during the campaign. It also generated more positive sentiment around the program in general outside of the contest. According to Oracle Social Cloud’s sentiment analysis, mentions of the Marriott Rewards program and the campaign outside of Marriott channels with a clearly defined sentiment ran 90% positive during the 30 Beds campaign.
Oracle Social: It was great to follow along with this campaign on your Marriott Rewards Facebook page. How did the campaign get started?
Michelle: The original idea actually came from a fan raving about our beds on the Marriott Rewards Facebook page. Our Marriott Rewards Facebook community is an extremely engaged group of fans. Since the idea for the “30 Beds in 30 Days” sweepstakes came from a Facebook fan, we decided to host the sweepstakes through Facebook. It was a perfect opportunity to thank the fans for their engagement, and give them the opportunity to engage with the brand every day on Facebook during the promotion, since fans could enter every day.
Oracle Social: That campaign was a great success. We’re happy for everyone who won a bed, and everyone who was more exposed to Marriott Rewards through this campaign. So what’s next for Marriott Rewards on social? We are followers of Marriott Rewards’ new Twitter handle: @MarriottRewards. What are your plans for that?
Michelle: It’s true—we are on Twitter now! We hope everyone reading this starts following us on Twitter, too. We know that Twitter is another great way for us to connect with our customers and hear their stories. We have also used it as a great way to get out news – like did you check out #SayHiToWifi? One of our first tweets from the new handle announced the news that Marriott Rewards members will receive free in-room WiFi. And, my best advice is to stay connected – something very fun is coming soon!
SLOB 2.2 Not Generating AWR reports? Testing Large User Counts With Think Time? Think Processes and SLOB_DEBUG.
I’ve gotten a lot of reports of folks branching out into SLOB 2.2 large user count testing with the SLOB 2.2 Think Time feature. I’m also getting reports that some of the same folks are not getting the resultant AWR reports one expects from a SLOB test.
If you are not getting your AWR reports there is the old issue I blogged about here. That old issue was related to a Red Hat bug. However, if you have addressed that problem, and still are not getting your AWR reports from large user count testing, it might be something as simple as the processes initialization parameter. After all, most folks have been accustomed to generating massive amounts of physical I/O with SLOB at low session counts.
I’ve made a few changes to runit.sh that will help future folks should they fall prey to the simple processes initialization parameter folly. The fixes will go into the next SLOB release. The following is a screen shot of these fixes and what one should expect to see in such a situation in the future. In the meantime, do take note of SLOB_DEBUG as mentioned in the screenshot:
Let’s say you’ve narrowed your online search to two hotels near Times Square to celebrate New Year’s Eve. One of the websites gives you all the dimensions, distances, and details of the property. The other includes images of people having fun in the lobby, favorable quotes from customers, and a useful 'things to do and see' column. Which hotel has you, the customer, at the center of its marketing?
In this post published in Oracle Voice/Forbes, Jeb Dasteel, Oracle’s senior vice president and chief customer officer (pictured left), has a bit of wisdom to share with hotels and all other organizations selling products and services: “The difficult truth is that your customers don’t care about your innovation or your products; they care only about the result you can help them achieve.”
So in the scenario above, you’re not just booking a hotel room for December 31. You’re looking for a memorable experience in downtown New York City to ring in the new year. So you want the best place to achieve that result.
Dasteel teamed up with Amir Hartman of Mainstay on this article. Even though consumers want to focus on results rather than products, Mainstay's research shows that the majority of marketing dollars are spent on "developing assets and content describing product features." And Forrester Research reports that “close to 70% of business leaders find the materials companies provide them useless.” 70%? Wow!
So what to do? According to Dasteel, organizations need to communicate in a language that is meaningful to customers with a focus on business outcomes. Your customer (not your product) should be “the hero and centerpiece of the story you’re telling.”
We really hope you will study Dasteel’s insight and recommendations. It could prevent you from wasting money on meaningless marketing assets and contribute mightily to your success.
A Guest Post by Justin King, B2B E-Commerce Strategist (pictured left)

E-commerce has a unique value proposition in B2B organizations. Of course, it is about customer acquisition, conversion, and average order value. But it also serves a bigger, long-term purpose.
Customers are in more control than ever before. With 43 percent of Americans retiring in the next eight years, the next generation of buyers is emerging. And B2B purchasers of all types want more online services and tools. As a result, we are witnessing the convergence of customer portals, marketing, social, service and shopping cart sites into e-commerce. E-commerce has become the digital conduit to your customers. B2B companies that deliver an exceptional e-commerce customer experience offer more control and access to their back office.
In fact, most everything we have done today in B2B e-commerce has unknowingly been to move functions from the back office to the customer. E-commerce is no longer just commerce. It is not just shopping carts and transactions. It is the primary customer facing channel between customers and your back office.
As I explained in my December 9, 2014 post, the role of the ERP is certainly increasing. However, there is more to the back office than just ERP.
It Takes an Ecosystem
If it takes a village to raise a child, it takes an ecosystem to support a new customer-facing channel. A traditional ecosystem is a community of organisms linked together by nutrient and energy flows. The B2B e-commerce ecosystem is a community of systems connected together to deliver a user experience that adds value to your customers and helps them do their jobs more easily. That includes:
- Enterprise Resource Planning (ERP)
- Configure, Price, and Quote (CPQ)
- Customer Relationship Management (CRM)
- Order Management System (OMS)
- Product information management (PIM)
- Content Management System (CMS)
- Experience management
- Marketing automation
Why is the Ecosystem So Big?
Everything you know about your customers and products sits in your back office, including order history, spending patterns, customer segmentation, product information and contracts, to name a few. You need all of that data to build an excellent customer experience. Great customer experiences increase conversion and revenue. Most importantly, great customer experiences make B2B users’ jobs easier, which yields loyalty. Loyal customers return to your site and will spend more.
Ecosystems are dynamic entities. They change. You will introduce new systems; upgrade some and deprecate others.
So How Do We Manage This Changing Ecosystem?
First, recognize the controlling factors. Ecosystems are controlled both by external and internal factors.
- External factors include conditioned expectations that customers bring to your site from at-home purchasing. With more customer control comes a proliferation of devices and types of experiences they choose to use.
- Internal factors include the complexity and readiness level of your ecosystem. I have a customer with more than 200 ERP systems as a result of multiple acquisitions. They are extremely sophisticated with various levels of readiness. Internal factors affect how fast an organization can move.
Next, start now and move quickly:
- Begin with the basics: If the goal is to add value and help your customers do their jobs more easily, you must deliver on the basics. Help them find information on your website, focus on building great product information and supporting content, and make transacting intuitive. Give your customers a few tools outside of the purchase path like viewing invoices, purchase orders, or punchout.
- Make continuous improvements in your back office: The data you have in your back office may not be customer ready. Make it better bit by bit. Start planning the types of innovative services you might offer your customer in the future and put plans in place to ready your ecosystem for those future tools.
- Separate form from function: Integration will become a dirty word at some point in your e-commerce project. If you rely on hardcore integration techniques whenever you introduce a new system or platform, your time to market will slow to a screeching halt. By separating out the experience from the content, data, rules and workflows, you can acquire new systems and data effortlessly. And your internal staff can quickly create new experiences for all kinds of devices and form factors.
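The “separate form from function” point above can be sketched in code: the experience layer depends on an abstract product source, so the backing system (a new ERP, a new PIM) can be swapped without touching the front end. All class, method, and data names here are invented for illustration; this is a minimal sketch of the design idea, not any particular product’s API.

```python
from abc import ABC, abstractmethod

class ProductSource(ABC):
    """Abstract boundary between the experience layer and the back office."""
    @abstractmethod
    def get_product(self, sku):
        ...

class LegacyErpSource(ProductSource):
    """One possible backend; a PIM or a new ERP would plug in the same way."""
    def get_product(self, sku):
        return {"sku": sku, "name": "Widget", "price": 9.99}

def render_product_card(source, sku):
    """Experience layer: knows nothing about which back-office system answers."""
    p = source.get_product(sku)
    return f"{p['name']} ({p['sku']}) - ${p['price']}"

print(render_product_card(LegacyErpSource(), "W-100"))  # Widget (W-100) - $9.99
```

Because the front end only ever sees `ProductSource`, introducing a new system means writing one new adapter rather than re-integrating every experience.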
Finally, remember an engaging customer experience is about adding value to B2B buyers. Visit them, interview them, watch them work and do prototype testing in a usability lab. Innovate on behalf of your customer and then write me (email@example.com) and tell me about it.
Gary Lang, Blackboard’s senior vice president in charge of product development and cloud operations, has announced his resignation and plans to join Amazon. Gary took the job with Blackboard in June 2013 and, along with CEO Jay Bhatt and SVP of Product Management Mark Strassman, formed the core management team that had worked together previously at AutoDesk. Gary led the reorganization effort to bring all product development under one organization, a core component of Blackboard’s recent strategy.
Michael described Blackboard’s new product moves toward cloud computing and an entirely new user experience (UX) for the Learn LMS, and Gary was the executive in charge of these efforts. These significant changes have yet to fully roll out to customers (public cloud in pilot, new UX about to enter pilot). Gary was also added to the IMS Global board of directors in July 2014 – I would expect this role to change as well given the move to Amazon.
At the same time, VP Product Management / VP Market Development Brad Koch has also resigned from Blackboard. Brad came to Blackboard from the ANGEL acquisition. Given his long-term central role leading product definition and being part of Ray Henderson’s team, Brad’s departure will also have a big impact. Brad’s LinkedIn page shows that he has left Blackboard, but it does not yet show his new company. I’m holding off reporting until I can get public confirmation.
Blackboard provided the following statement from CEO Jay Bhatt.
The decision to leave Blackboard for an opportunity with Amazon was a personal one for Gary that allows him to return home to the West Coast. During his time here, Gary has made significant contributions to the strategic direction of Blackboard and the technology we deliver to customers. The foundation he has laid, along with other leaders on our product development team, will allow us to continue to drive technical excellence for years to come. We thank him for his leadership and wish him luck as he embarks on this new endeavor.
- The two resignations are unrelated as far as I can tell.
The post Blackboard’s SVP of Product Development Gary Lang Resigns appeared first on e-Literate.
At Rittman Mead R&D, we have the privilege of solving some of our clients’ most challenging data problems. We recently built a set of customized data products that leverage the power of Oracle and Cloudera platforms and wanted to share some of the fun we’ve had in creating unique user experiences. We’ve been thinking about how we can lean on our efforts to help make the holidays even more special for the extended Rittman Mead family. With that inspiration, we had several questions on our minds:
- How can we throw an amazing holiday party?
- What gifts can we give that we can be sure our coworkers, friends, and family will enjoy?
- What gifts would we want for ourselves?
After a discussion over drinks, the answers became clear. We decided to create a tool that uses data analytics to help you create exceptional cocktails for the holidays.
Here is how we did it. First, we analyzed the cocktail recipes of three world-renowned cocktail bars: PDT, Employees Only, and Death & Co. We then turned their drink recipes into data and got to work on the Bar Optimizer, which uses analytics on top of that data to help you make the holiday season tastier than ever before.
To use the Bar Optimizer, enter the liquors and other ingredients that you have on hand to see what drinks you can make. It then recommends additional ingredients that let you create the largest variety of new drinks. You can also use this feature to give great gifts based on others’ liquor cabinets. Finally, try using one of our optimized starter kits to stock your bar for a big holiday party. We’ve crunched the numbers to find the fewest bottles that can make the largest variety of cocktails.
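The “recommend the ingredient that unlocks the most new drinks” idea described above is essentially a greedy coverage step. Here is a toy sketch of that logic; the recipes and function names are invented for this post and are not the Bar Optimizer’s actual code or data.

```python
# A handful of made-up recipes: drink name -> set of required ingredients.
RECIPES = {
    "Martini":      {"gin", "dry vermouth"},
    "Negroni":      {"gin", "sweet vermouth", "campari"},
    "Manhattan":    {"rye", "sweet vermouth", "bitters"},
    "Boulevardier": {"rye", "sweet vermouth", "campari"},
}

def makeable(on_hand):
    """Drinks whose every ingredient is already on hand."""
    return {name for name, needs in RECIPES.items() if needs <= on_hand}

def best_next_ingredient(on_hand):
    """Greedy step: the single ingredient that unlocks the most new drinks."""
    current = makeable(on_hand)
    candidates = set().union(*RECIPES.values()) - on_hand
    return max(candidates,
               key=lambda ing: len(makeable(on_hand | {ing}) - current))

bar = {"gin", "rye", "sweet vermouth"}
print(best_next_ingredient(bar))  # campari unlocks both the Negroni and the Boulevardier
```

Repeating the greedy step gives a rough version of the “fewest bottles, most cocktails” starter-kit optimization as well.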
Click the annotated screenshot above for details, and contact us if you would like more information about how we build products that take your data beyond dashboards.
A conversation I have too often with vendors goes something like:
- “That confidential thing you told me is interesting, and wouldn’t harm you if revealed; probably quite the contrary.”
- “Well, I guess we could let you mention a small subset of it.”
- “I’m sorry, that’s not enough to make for an interesting post.”
That was the genesis of some tidbits I recently dropped about WibiData and predictive modeling, especially but not only in the area of experimentation. However, Wibi just reversed course and said it would be OK for me to tell more or less the full story, as long as I note that we’re talking about something that’s still in beta test, with all the limitations (to the product and my information alike) that beta implies.
As you may recall:
- WibiData started out with a rich technology stack …
- … but decided to cast itself as an application company …
- … whose first vertical market is retailing.
With that as background, WibiData’s approach to predictive modeling as of its next release will go something like this:
- There is still a strong element of classical modeling by data scientists/statisticians, with the models re-scored in batch, perhaps nightly.
- But of course at least some scoring should be done as real-time as possible, to accommodate fresh data such as:
- User interactions earlier in today’s session.
- Technology for today’s session (device, connection speed, etc.)
- Today’s weather.
- WibiData Express is/incorporates a Scala-based language for modeling and query.
- WibiData believes Express plus a small algorithm library gives better results than more mature modeling libraries.
- There is some confirming evidence of this …
- … but WibiData’s customers have by no means switched over yet to doing the bulk of their modeling in Wibi.
- WibiData will allow line-of-business folks to experiment with augmentations to the base models.
- Supporting technology for predictive experimentation in WibiData will include:
- Automated multi-armed bandit testing (in previous versions even A/B testing has been manual).
- A facility for allowing fairly arbitrary code to be included into otherwise conventional model-scoring algorithms, where conventional scoring models can come:
- Straight from WibiData Express.
- Via PMML (Predictive Modeling Markup Language) generated by other modeling tools.
- An appropriate user interface for the line-of-business folks to do certain kinds of injecting.
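For readers unfamiliar with multi-armed bandit testing, here is a toy epsilon-greedy sketch. The post doesn’t say which bandit algorithm WibiData uses, so treat this purely as an illustration of the general idea: mostly serve the best-performing variant while continuing to explore the others, instead of splitting traffic evenly as in manual A/B testing.

```python
import random

class EpsilonGreedy:
    def __init__(self, n_arms, epsilon=0.1, seed=42):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = [0] * n_arms      # times each arm was served
        self.values = [0.0] * n_arms    # running mean reward per arm

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))                   # explore
        return max(range(len(self.counts)), key=self.values.__getitem__)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulate two page variants; arm 1 truly converts much better.
bandit = EpsilonGreedy(n_arms=2)
true_rates = [0.05, 0.50]
for _ in range(5000):
    arm = bandit.choose()
    bandit.update(arm, 1 if bandit.rng.random() < true_rates[arm] else 0)
print(bandit.counts)  # traffic concentrates on the better arm
```

The appeal over manual A/B testing is exactly this automatic shift: the losing variant stops wasting traffic without anyone having to end the test by hand.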
Let’s talk more about predictive experimentation. WibiData’s paradigm for that is:
- Models are worked out in the usual way.
- Businesspeople have reasons for tweaking the choices the models would otherwise dictate.
- They enter those tweaks as rules.
- The resulting combination — models plus rules — are executed and hence tested.
If those reasons for tweaking are in the form of hypotheses, then the experiment is a test of those hypotheses. However, WibiData has no provision at this time to automagically incorporate successful tweaks back into the base model.
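The “models plus rules” combination might look something like this sketch: a batch-scored base model, with adjustments entered by line-of-business staff layered on top at scoring time. The rule format and every number here are invented for illustration; WibiData’s actual rule syntax is not public.

```python
def base_model_score(shopper):
    """Stand-in for a propensity score produced by the nightly batch model."""
    return 0.30 if shopper.get("returning") else 0.10

RULES = [
    # (predicate, adjustment) pairs, e.g. "morning shoppers are in a hurry,
    # so boost the streamlined experience for them".
    (lambda s: s.get("hour") in range(7, 10), 0.20),
    (lambda s: s.get("device") == "mobile",   0.05),
]

def score(shopper):
    """Final score = model output plus every matching business-rule tweak."""
    total = base_model_score(shopper)
    for predicate, tweak in RULES:
        if predicate(shopper):
            total += tweak
    return round(total, 2)

print(score({"returning": True, "hour": 8, "device": "mobile"}))  # 0.55
print(score({"returning": False, "hour": 14}))                    # 0.1
```

Testing the rule then amounts to comparing outcomes with and without the tweak applied, which is where the bandit machinery comes in.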
What might those hypotheses be like? It’s a little tough to say, because I don’t know in fine detail what is already captured in the usual modeling process. WibiData gave me only one real-life example, in which somebody hypothesized that shoppers would be in more of a hurry at some times of day than others, and hence would want more streamlined experiences when they could spare less time. Tests confirmed that was correct.
That said, I did grow up around retailing, and so I’ll add:
- Way back in the 1970s, Wal-Mart figured out that in large college towns, clothing in the football team’s colors was wildly popular. I’d hypothesize such a rule at any vendor selling clothing suitable for being worn in stadiums.
- A news event, blockbuster movie or whatever might trigger a sudden change in/addition to fashion. An alert merchant might guess that before the models pick it up. Even better, she might guess which psychographic groups among her customers were most likely to be paying attention.
- Similarly, if a news event caused a sudden shift in buyers' optimism/pessimism/fear of disaster, I'd test a response to it immediately.
Finally, data scientists seem to still be a few years away from neatly solving the problem of multiple shopping personas — are you shopping in your business capacity, or for yourself, or for a gift for somebody else (and what can we infer about that person)? Experimentation could help fill the gap.
The default value of METHOD_OPT from 10g onwards is ‘FOR ALL COLUMNS SIZE AUTO’.
The definition of AUTO as per the Oracle documentation is:
AUTO: Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.
This basically implies that Oracle will automatically create histograms on those columns that have a skewed data distribution and are referenced by SQL statements. However, this gives rise to a problem: Oracle may generate too many unnecessary histograms.
– Create a table with skewed data distribution in two columns
SQL> drop table hr.skewed purge;

SQL> create table hr.skewed
     ( empno   number,
       job_id  varchar2(10),
       salary  number );

SQL> insert into hr.skewed
     select employee_id, job_id, salary
     from   hr.employees;
– On gathering statistics for the table using default options, it can be seen that no histogram is gathered on any column, although the data distribution in the JOB_ID and SALARY columns is skewed.
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> col table_name for a10
SQL> col column_name for a10
SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE
– Let’s now issue some queries against the table based on each of its three columns, followed by statistics gathering, to verify that histograms get automatically created only on columns with skewed data distribution.
– No histogram gets created when the EMPNO column, whose data is distributed uniformly, is queried.
SQL> select * from hr.skewed where empno = 100;

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE
– A histogram gets created on the JOB_ID column as soon as we search for records with a given JOB_ID, as the data distribution in the JOB_ID column is non-uniform.
SQL> select * from hr.skewed where job_id = 'CLERK';

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
– A histogram gets created on the SALARY column when a search is made for employees drawing a salary of less than 10000, as the data distribution in the SALARY column is non-uniform.
SQL> select * from hr.skewed where salary < 10000;

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     FREQUENCY
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
Thus gathering statistics using default options, manually or as part of the automatic maintenance task, may lead to the creation of a histogram on any column that has a skewed data distribution and has appeared in a search clause even once. That is, Oracle makes even the histograms you didn't ask for. Some of those histograms might not be needed by the application and hence are undesirable, as computing histograms is a resource-intensive operation; moreover, they might degrade performance as a result of their interaction with bind peeking.
To prevent this, employ the FOR ALL COLUMNS SIZE REPEAT option of the METHOD_OPT parameter, which prevents deletion of existing histograms and collects histograms only on the columns that already have them.
The first step is to eliminate the unwanted histograms and keep histograms only on the desired columns.
There are two options:
OPTION-I: Delete histograms from the unwanted columns and use the REPEAT option henceforth, which collects histograms only on the columns that already have them.
– Delete unwanted histogram for SALARY column
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
       METHOD_OPT => 'for columns salary size 1');

-- Verify that the histogram for the SALARY column has been deleted

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
– Issue a SQL statement with the SALARY column in the WHERE clause and verify that gathering stats using the REPEAT option retains the histogram on the JOB_ID column and does not cause a histogram to be created on the SALARY column.
SQL> select * from hr.skewed where salary < 10000;

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
       METHOD_OPT => 'for columns salary size REPEAT');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
OPTION-II: Wipe out all histograms and manually add only the desired ones. Use the REPEAT option henceforth, which collects histograms only on the columns that already have one.
– Delete histograms on all columns
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
       METHOD_OPT => 'for all columns size 1');
– Verify that histograms on all columns have been dropped
SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE
– Create histogram only on the desired JOB_ID column
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
       METHOD_OPT => 'for columns JOB_ID size AUTO');
– Verify that histogram has been created on JOB_ID
SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
– Verify that gathering stats using the REPEAT option retains a histogram only on the JOB_ID column, on which one already exists
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
       METHOD_OPT => 'for columns salary size REPEAT');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
That is, now Oracle will no longer make histograms you didn’t ask for.
– Finally, change the preference for the METHOD_OPT parameter of the automatic stats gathering job from the default value of AUTO to REPEAT, so that it gathers histograms only for the columns that already have one.
– Get Current value –
SQL> select dbms_stats.get_prefs('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
-----------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO
– Set preference to REPEAT–
SQL> exec dbms_stats.set_global_prefs ('METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');
– Verify –
SQL> select dbms_stats.get_prefs('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
-----------------------------------------------------------------------
FOR ALL COLUMNS SIZE REPEAT
From now on, gathering statistics, whether manually or automatically, will not create any new histograms, while retaining all the existing ones.
I hope this post is useful.
The post Create Histograms On Columns That Already Have One appeared first on ORACLE IN ACTION.
This is Part II in a series. Part I can be found here. Part I covered a very simple case of SLOB data loading. This installment is aimed at how one can use SLOB as a platform test for a unique blend of concurrent, high-bandwidth data loading, index creation and CBO statistics gathering.

Put SLOB On The Box – Not In a Box
As a reminder, the latest SLOB kit is always available here: kevinclosson.net/slob .
Often I hear folks speak of what SLOB is useful for, and the list is really short. The list is so short that a single acronym seems to cover it: IOPS, just IOPS and nothing else. But SLOB is useful for so much more than just testing a platform for IOPS capability. I aim to make a few blog installments to make this point.

SLOB for More Than Physical IOPS
I routinely speak about how to use SLOB to study host characteristics such as NUMA and processor threading (e.g., Simultaneous Multithreading on modern Intel Xeons). This sort of testing is possible when the sum of all SLOB schemas fit into the SGA buffer pool. When testing in this fashion, the key performance indicators (KPI) are LIOPS (Logical I/O per second) and SQL Executions per second.
This blog post is aimed at suggesting yet another manner of platform testing with SLOB–specifically concurrent bulk data loading.
The SLOB data loader (~SLOB/setup.sh) offers the ability to test non-parallel, concurrent table loading, index creation and CBO statistics collection.
In this blog post I’d like to share a “SLOB data loading recipe kit” for those who wish to test high performance SLOB data loading. The contents of the recipe will be listed below. First, I’d like to share a platform measurement I took using the data loading recipe. The host was a 2s20c40t E5-2600v2 server with 4 active 8GFC paths to an XtremIO array.
The tar archive kit I’ll refer to below has the full slob.conf in it, but for now I’ll just use a screen shot. Using this slob.conf and loading 512 SLOB schema users generates 1TB of data in the IOPS tablespace. Please note the attention I’ve drawn to the slob.conf parameters SCALE and LOAD_PARALLEL_DEGREE. The size of the aggregate of SLOB data is a product of SCALE and the number of schemas being loaded. I drew attention to LOAD_PARALLEL_DEGREE because that is the key setting in increasing the concurrency level during data loading. Most SLOB users are quite likely not accustomed to pushing concurrency up to that level. I hope this blog post makes doing so seem more worthwhile in certain cases.
The following is a screenshot of the output from the SLOB 2.2 data loader. The screenshot shows that the concurrent data loading portion of the procedure took 1,474 seconds. On the surface that would appear to be a data loading rate of approximately 2.5TB/h. One thing to remember, however, is that SLOB data is loaded in batches controlled by LOAD_PARALLEL_DEGREE. Each batch loads LOAD_PARALLEL_DEGREE number of tables, then creates a unique index and gathers CBO statistics for each of them. So the overall “data loading” time is really data loading plus these ancillary tasks. To put that another way, it’s true this is a ~2.5TB/h data loading use case, but there is more going on than just simple data loading. If this were a pure and simple data loading processing stream then the results would be much higher than 2.5TB/h. I’ll likely blog about that soon.
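The load-rate arithmetic is easy to check; a quick sketch using the numbers reported above (1TB loaded in 1,474 seconds) gives roughly 2.44TB/h, which rounds to the approximately 2.5TB/h quoted:

```python
# Numbers reported by the SLOB 2.2 data loader in this test (see screenshots).
data_tb = 1.0          # SCALE x 512 schemas produced 1TB in the IOPS tablespace
load_seconds = 1474    # concurrent loading portion reported by setup.sh

rate_tb_per_hour = data_tb / (load_seconds / 3600.0)
print(f"{rate_tb_per_hour:.2f} TB/h")   # 2.44 TB/h
```

Remember that this figure folds in the index creation and statistics gathering as well, so the pure data-loading rate is higher.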
As the screenshot shows, the latest SLOB 2.2 data loader isolates the concurrent loading portion of setup.sh. In this case, the seed table (user1) was loaded in 20 seconds and then the concurrent loading portion completed in 1,474 seconds.

That Sounds Like A Good Amount Of Physical I/O But What’s That Look Like?
To help you visualize the physical I/O load this manner of testing places on a host, please consider the following screenshot. The screenshot shows peaks of vmstat 30-second interval reporting of approximately 2.8GB/s physical read I/O combined with about 435MB/s of write I/O, for an average of about 3.2GB/s. This host has but 4 active 8GFC fibre channel paths to storage, so that particular bottleneck is simple to solve by adding another 4-port HBA! Note also how very little host CPU is utilized to generate the 4x8GFC-saturating workload. User mode cycles are but 15% and kernel mode utilization was 9%. It’s true that 24% sounds like a lot; however, this is a 2s20c40t host, and therefore 24% accounts for only 9.6 processor threads – or about 5 cores’ worth of bandwidth. There may be some readers who were not aware that 5 “paltry” Ivy Bridge Xeon cores are capable of driving this much data loading!
NOTE: The SLOB method is centered on the sparse blocks. Naturally, fewer CPU cycles are required for loading data into sparse blocks.
Please note, the following vmstat shows peaks and valleys. I need to remind you that SLOB data loading consists of concurrent processing of not only data loading (Insert as Select) but also a unique index creation and CBO statistics gathering. As one would expect I/O will wane as the loading process shifts from the bulk data load to the index creation phase and then back again.
Finally, the following screenshot shows the very minimalist init.ora settings I used during this testing.
The recipe kit can be found in the following downloadable tar archive. The kit contains the necessary files one would need to reproduce this SLOB data loading time so long as the platform has sufficient performance attributes. The tar archive also has all output generated by setup.sh as the following screenshot shows:
The SLOB 2.2 data loading recipe kit can be downloaded here. Please note, the screenshot immediately above shows the md5 checksum for the tar archive.

Summary
This post shows how one can tune the SLOB 2.2 data loading tool (setup.sh) to load 1 terabyte of SLOB data in well under 25 minutes. I hope this is helpful information and that, perhaps, it will encourage SLOB users to consider using SLOB for more than just physical IOPS testing.
Filed under: oracle
It's been a long time.
This is an amazing new .FMB Forms module, a kind of "Tetris"-like game.
It needs the latest 1.7.7 version of the LAF to run. The game is a little bit buggy, but the aim here is only to demonstrate what you can do with the new LAF 1.7.7 dynamic shape creation and animation.
Have a good time :-)
Download the brickdown.fmb module there.
1. Start by invoking the following to show your deployed applications:
[Tue Dec 16 09:32:10 papicella@:~/cf/APJ-vcloud ] $ cf apps
Getting apps in org ANZ / space development as pas...
name requested state instances memory disk urls
pas-playjava started 1/1 512M 1G pas-playjava.apj.fe.pivotal.io
pcfhawq started 1/1 512M 1G pcfhawq.apj.fe.pivotal.io
apples-spring-music started 1/1 512M 1G apples-spring-music.apj.fe.pivotal.io
pas-petclinic started 1/1 512M 1G pas-petclinic.apj.fe.pivotal.io
2. Now let's view the files for the application:
[Tue Dec 16 09:33:29 papicella@:~/cf/APJ-vcloud ] $ cf files apples-spring-music
Getting files for app apples-spring-music in org ANZ / space development as pas...
3. Now let's view the contents of a specific file by providing the full path to the file, in this case our GC log file:
[Tue Dec 16 09:33:41 papicella@:~/cf/APJ-vcloud ] $ cf files apples-spring-music /app/apples_gc.log
Getting files for app apples-spring-music in org ANZ / space development as pas...
OpenJDK 64-Bit Server VM (25.40-b06) for linux-amd64 JRE (1.8.0_25--vagrant_2014_10_17_04_37-b17), built on Oct 17 2014 04:40:49 by "vagrant" with gcc 4.4.3
Memory: 4k page, physical 16434516k(1028892k free), swap 16434488k(16434476k free)
CommandLine flags: -XX:InitialHeapSize=391468032 -XX:MaxHeapSize=391468032 -XX:MaxMetaspaceSize=67108864 -XX:MetaspaceSize=67108864 -XX:OnOutOfMemoryError=/home/vcap/app/.java-buildpack/open_jdk_jre/bin/killjava.sh -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:ThreadStackSize=995 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
1.522: [GC (Allocation Failure) 95744K->16737K(367104K), 0.0590876 secs]
1.786: [GC (Allocation Failure) 112481K->23072K(367104K), 0.0735813 secs]
2.075: [GC (Allocation Failure) 118816K->32499K(367104K), 0.0531070 secs]
2.315: [GC (Allocation Failure) 128243K->45124K(367104K), 0.0428136 secs]
2.893: [GC (Allocation Failure) 140868K->53805K(367104K), 0.0375078 secs]
4.143: [GC (Allocation Failure) 149549K->63701K(335360K), 0.1507024 secs]
5.686: [GC (Allocation Failure) 127701K->69319K(331776K), 0.0703850 secs]
7.060: [GC (Allocation Failure) 133319K->70962K(348672K), 0.0121269 secs]
8.458: [GC (Allocation Failure) 130866K->69734K(322560K), 0.0228917 secs]
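As an aside, GC logs in this -XX:+PrintGCTimeStamps format are easy to post-process once retrieved via cf files. Here is a small Python sketch (not part of the cf CLI, just an illustration assuming the exact log format above) that totals the reported pause time:

```python
import re

# Matches lines like:
#   1.522: [GC (Allocation Failure) 95744K->16737K(367104K), 0.0590876 secs]
gc_line = re.compile(
    r"(?P<ts>\d+\.\d+): \[GC \((?P<cause>[^)]+)\) "
    r"(?P<before>\d+)K->(?P<after>\d+)K\((?P<heap>\d+)K\), (?P<secs>[\d.]+) secs\]"
)

log = """\
1.522: [GC (Allocation Failure) 95744K->16737K(367104K), 0.0590876 secs]
1.786: [GC (Allocation Failure) 112481K->23072K(367104K), 0.0735813 secs]
2.075: [GC (Allocation Failure) 118816K->32499K(367104K), 0.0531070 secs]
"""

pauses = [float(m.group("secs")) for m in map(gc_line.match, log.splitlines()) if m]
total_pause = sum(pauses)
print(f"{len(pauses)} GC events, {total_pause:.4f}s total pause")
```

Piping the output of `cf files <app> /app/apples_gc.log` into a script like this gives a quick health check without downloading the whole log by hand.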
“This accomplishment is a true testament to the dedicated and talented professionals responsible for expanding our Oracle practice,” said John Klein, Principal at Redstone. “Oracle WebCenter Suite helps our customers’ businesses by empowering them to securely accumulate and disseminate knowledge.”
Read the entire press release here.
In this mini-series of blog posts I’m taking a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of ‘normal’ machines on which you want to run the same commands and monitoring.
First we looked at using SSH keys for intra-machine authorisation, which is a pre-requisite for executing the same command across multiple machines using pdsh, as well as for what we look at in this article – monitoring OS metrics across a cluster with colmux.
Colmux is written by Mark Seger, the same person who wrote collectl. It makes use of collectl on each target machine to report back OS metrics across a cluster to a single node.
Install collectl across the cluster
Using pdsh we can easily install collectl on each node (if it’s not already), which is a pre-requisite for colmux:
pdsh -w root@rnmcluster02-node0[1-4] "yum install -y collectl && service collectl start && chkconfig collectl on"
NB by enabling the collectl service on each node it will capture performance data to file locally, which colmux can replay centrally.
Then install colmux itself, which you can download from Sourceforge. It only needs to be actually installed on a single host, but obviously we could push it out across the cluster with pdsh if we wanted to be able to invoke it on any node at will. Note that here I’m running it on a separate linux box (outside of the cluster) rather than on my Mac:
cd /tmp
# Make sure you get the latest version of collectl-utils, from
# https://sourceforge.net/projects/collectl-utils/files/
# This example is hardcoded to a version and a particular sourceforge mirror
curl -O http://garr.dl.sourceforge.net/project/collectl-utils/collectl-utils-4.8.2/collectl-utils-4.8.2.src.tar.gz
tar xf collectl-utils-4.8.2.src.tar.gz
cd collectl-utils-4.8.2
sudo ./INSTALL
# collectl-utils also includes colplot, so if you might want to use it restart
# apache (assuming it's installed)
sudo service httpd restart

Colmux and networking
A couple of important notes:
- The machine you run colmux from needs to have port 2655 open in order for each node’s collectl to send back the data to it.
- You may also encounter an issue if you have any odd networking (eg NAT on virtual machines) that causes colmux not to work, because it picks the ‘wrong’ network interface of the host to tell collectl on each node to send its data to. Details and a workaround are here.
To view metrics across the cluster in real time, run colmux specifying the nodes and the user:

colmux -addr 'rnmcluster02-node0[1-4]' -username root
# Mon Dec 1 22:20:40 2014  Connected: 4 of 4
#                    <--------CPU--------><----------Disks-----------><----------Network---------->
#Host                cpu  sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut
rnmcluster02-node01    1    1    28     36      0      0      0      0      0      2      0       2
rnmcluster02-node04    0    0    33     28      0      0     36      8      0      1      0       1
rnmcluster02-node03    0    0    15     17      0      0      0      0      0      1      0       1
rnmcluster02-node02    0    0    18     18      0      0      0      0      0      1      0       1
Real-time view, persisted
The -cols option puts the hosts across the top and time as rows, showing one or more columns from the default output. In this example it is the cpu value, along with the disk read/write (columns 1, 5 and 7 of the metrics as seen above):
colmux -addr 'rnmcluster02-node0[1-4]' -user root -cols 1,5,7
cpu                                   KBRead                                KBWrit
node01 node02 node03 node04 | node01 node02 node03 node04 | node01 node02 node03 node04
     0      0      0      0 |      0      0      0      0 |     12     28      0      0
     0      0      0      0 |      0      0      0      0 |     12     28      0      0
     1      0      1      0 |      0      0      0      0 |      0      0      0      0
     0      0      0      0 |      0      0      0      0 |      0      0      0      0
     0      0      0      0 |      0      0      0      0 |      0      0      0      0
     0      0      0      0 |      0      0      0      0 |      0     20      0      0
     0      0      0      0 |      0      0      0      0 |     52      4      0      0
     0      0      0      2 |      0      0      0      0 |      0      0      0      0
     1      0      0      0 |      0      0      0      0 |      0      0      0      0
    15     16     15     15 |      0      4      4      4 |     20     40     32     48
     0      0      1      1 |      0      0      0      0 |      0      0      4      0
     1      0      0      0 |      0      0      0      0 |      0      0      0      0
To check the numbers of the columns that you want to reference, run the command with the --test flag:
colmux -addr 'rnmcluster02-node0[1-4]' -user root --test

>>> Headers <<<
#                    <--------CPU--------><----------Disks-----------><----------Network---------->
#Host                cpu  sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut

>>> Column Numbering <<<
 0 #Host
 1 cpu
 2 sys
 3 inter
 4 ctxsw
 5 KBRead
 6 Reads
 7 KBWrit
 8 Writes
 9 KBIn
10 PktIn
11 KBOut
12 PktOut
And from there you get the numbers of the columns to reference in the -cols argument. To include the timestamp, use -oT in the -command argument and offset the column numbers by 1:
colmux -addr 'rnmcluster02-node0[1-4]' -user root -cols 2,6,8 -command '-oT'
sys                                   Reads                                 Writes
#Time    node01 node02 node03 node04 | node01 node02 node03 node04 | node01 node02 node03 node04
22:24:50      0      0      0      0 |      0      0      0      0 |      0      0      0      0
22:24:51      1      0      0      0 |      0      0      0      0 |      0      0      0      0
22:24:52      0      0      0      0 |      0      0      0      0 |      0     16      0     16
22:24:53      1      0      0      0 |      0      0      0      0 |     36      0     16      0
22:24:54      0      0      0      1 |      0      0      0      0 |      0      0      0      0
22:24:55      0      0      0      0 |      0      0      0      0 |      0     20     32     20
NB There’s a bug with colmux 4.8.2 that prevents you accessing the first metric with -cols when you also enable the timestamp with -oT – details here.
Collectl (which is what colmux calls to get the data) can fetch metrics from multiple subsystems on a node. You can access all of these through colmux too. By default when you run colmux you get cpu, disk and network, but you can specify others using the -s argument followed by the subsystem identifier.
To examine the available subsystems run collectl on one of the target nodes:
[root@rnmcluster02-node01 ~]# collectl --showsubsys

The following subsystems can be specified in any combinations with -s or
--subsys in both record and playback mode. [default=bcdfijmnstx]

These generate summary, which is the total of ALL data for a particular type

b - buddy info (memory fragmentation)
c - cpu
d - disk
f - nfs
i - inodes
j - interrupts by CPU
l - lustre
m - memory
n - network
s - sockets
t - tcp
x - interconnect (currently supported: OFED/Infiniband)
y - slabs
From the above list we can see that if we want to also show memory detail alongside CPU we need to include m and c in the subsystem list:
colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-scm'
# Tue Dec 2 08:02:38 2014  Connected: 4 of 4
#                    <--------CPU--------><-----------Memory----------->
#Host                cpu  sys inter  ctxsw  Free  Buff  Cach  Inac  Slab   Map
rnmcluster02-node02    1    0    19     18   33M   15M  345M  167M   30M   56M
rnmcluster02-node04    0    0    30     24   32M   15M  345M  167M   30M   56M
rnmcluster02-node03    0    0    30     36   32M   15M  345M  165M   30M   56M
rnmcluster02-node01    0    0    16     16   29M   15M  326M  167M   27M   81M

Changing the sample frequency
To change the sample frequency, use the -i option in the -command argument:
colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-scm -i10 -oT' -cols 2,4
Samples every 10 seconds:
sys                                   ctxsw
#Time    node01 node02 node03 node04 | node01 node02 node03 node04
08:06:29     -1     -1     -1     -1 |     -1     -1     -1     -1
08:06:39     -1     -1     -1     -1 |     -1     -1     -1     -1
08:06:49      0      0      0      0 |     14     13     15     19
08:06:59      0      0      0      0 |     13     13     17     21
08:07:09      0      0      0      0 |     19     18     15     24
08:07:19      0      0      0      0 |     13     13     15     19
08:07:29      0      0      0      0 |     13     13     14     19
08:07:39      0      0      0      0 |     12     13     13     19

Column width
To change the width of the columns, use -colwidth:

colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-scm' -cols 1 -colwidth 20
cpu
 rnmcluster02-node01  rnmcluster02-node02  rnmcluster02-node03  rnmcluster02-node04
                  -1                   -1                   -1                   -1
                  -1                   -1                   -1                   -1
                   1                    0                    0                    0
                   0                    0                    0                    0
                   0                    1                    0                    0
                   0                    0                    1                    0
                   1                    0                    1                    0
                   0                    1                    0                    0

Playback
As well as running interactively, collectl can run as a service and record metric samples to disk. Using colmux you can replay these from across the cluster.
To replay the recorded data, pass -p and the path to the collectl log files (this assumes that the path is the same on each host). As with real-time mode, for different subsystems change the flags after -command:
colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-p /var/log/collectl/*20141201* -scmd -oD'
[...]
# 21:48:50 Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time     cpu sys inter ctxsw Free Buff Cach Inac Slab  Map KBRead Reads KBWrit Writes
rnmcluster02-node04 20141201 21:48:50   0   0    17    15  58M  10M 340M 162M  30M  39M      0     0      1      0
rnmcluster02-node03 20141201 21:48:50   0   0    11    13  58M  10M 340M 160M  30M  39M      0     0      0      0
rnmcluster02-node02 20141201 21:48:50   0   0    11    15  58M  10M 340M 163M  29M  39M      0     0      1      0
rnmcluster02-node01 20141201 21:48:50   0   0    12    14  33M  12M 342M 157M  27M  63M      0     0      1      0

# 21:49:00 Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time     cpu sys inter ctxsw Free Buff Cach Inac Slab  Map KBRead Reads KBWrit Writes
rnmcluster02-node04 20141201 21:49:00   0   0    17    15  58M  10M 340M 162M  30M  39M      0     0      4      0
rnmcluster02-node03 20141201 21:49:00   0   0    13    14  58M  10M 340M 160M  30M  39M      0     0      5      0
rnmcluster02-node02 20141201 21:49:00   0   0    12    14  58M  10M 340M 163M  29M  39M      0     0      1      0
rnmcluster02-node01 20141201 21:49:00   0   0    12    15  33M  12M 342M 157M  27M  63M      0     0      6      0

# 21:49:10 Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time     cpu sys inter ctxsw Free Buff Cach Inac Slab  Map KBRead Reads KBWrit Writes
rnmcluster02-node04 20141201 21:49:10   0   0    23    23  58M  10M 340M 162M  30M  39M      0     0      1      0
rnmcluster02-node03 20141201 21:49:10   0   0    19    24  58M  10M 340M 160M  30M  39M      0     0      2      0
rnmcluster02-node02 20141201 21:49:10   0   0    18    23  58M  10M 340M 163M  29M  39M      0     0      2      1
rnmcluster02-node01 20141201 21:49:10   0   0    18    24  33M  12M 342M 157M  27M  63M      0     0      1      0
[...]
Restrict the time frame by adding to -command the arguments --from and --thru:
[oracle@rnm-ol6-2 ~]$ colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-p /var/log/collectl/*20141201* -scmd -oD --from 21:40:00 --thru 21:40:10'

# 21:40:00 Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time     cpu sys inter ctxsw Free Buff Cach Inac Slab  Map KBRead Reads KBWrit Writes
rnmcluster02-node04 20141201 21:40:00   0   0    16    14  59M  10M 340M 162M  30M  39M      0     0      0      0
rnmcluster02-node03 20141201 21:40:00   0   0    12    14  58M  10M 340M 160M  30M  39M      0     0      8      1
rnmcluster02-node02 20141201 21:40:00   0   0    12    15  59M  10M 340M 162M  30M  39M      0     0      6      1
rnmcluster02-node01 20141201 21:40:00   0   0    13    16  56M  11M 341M 156M  27M  42M      0     0      7      1

# 21:40:10 Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time     cpu sys inter ctxsw Free Buff Cach Inac Slab  Map KBRead Reads KBWrit Writes
rnmcluster02-node04 20141201 21:40:10   0   0    26    33  59M  10M 340M 162M  30M  39M      1     0     10      2
rnmcluster02-node03 20141201 21:40:10   0   0    20    31  58M  10M 340M 160M  30M  39M      0     0      4      1
rnmcluster02-node02 20141201 21:40:10   0   0    23    35  59M  10M 340M 162M  30M  39M      3     0      9      2
rnmcluster02-node01 20141201 21:40:10   0   0    23    37  56M  11M 341M 156M  27M  42M      4     1      4      1

colmux reference
You can find more about colmux from the website, as well as from the built-in man page.
As a little bonus to the above, colmux is part of the collectl-utils package, which also includes colplot, a gnuplot-based web tool that renders collectl data into graphs. It’s pretty easy to set up, running under Apache just fine and needing only gnuplot if you don’t have it installed already. It can report metrics across a cluster if you first make each node’s collectl data available locally to colplot.
Navigating to the web page shows the interface from which you can trigger graph plots based on the collectl data available:
colplot’s utilitarian graphs are a refreshing contrast to every webapp that is built nowadays promising “beautiful” visualisations (which no doubt the authors are “passionate” about making “awesome”):
The graphs are functional and can be scaled as needed, but each change is a trip back to the front page to tweak options and re-render:
For me, colplot is an excellent tool for point-in-time analysis and diagnostics, but for more generalised monitoring with drilldown into detail, it is too manual to be viable and I’ll be sticking with collectl -> graphite -> grafana with its interactive and flexible graph rendering:
Do note however that colplot specifically does not drop data points, so if there is a spike in your data you will see it. Other tools (possibly including Graphite, but I’ve not validated this) will, for larger timespans, average out data series so as to present a smoother picture of a metric (eg instead of a point every second, maybe one every ten seconds). If you are doing close analysis of a system’s behaviour in a particular situation this may be a problem. If you want a more generalised overview of a system’s health, with the option to drill into historical data as needed, it will be less of an issue.

Summary
When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.
After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across the cluster. It’s a big time saver particularly when initially setting up the cluster given the installation and configuration changes that are inevitably needed.
To monitor a cluster I would always recommend collectl as the base metric collector. colmux works excellently for viewing these metrics from across the cluster in a single place from the commandline. For viewing the metrics over the longer term you can store them in (or replay them into) Graphite/Carbon and render them in Grafana. You also have the option of colplot, since it is installed as part of collectl-utils.
So now your turn – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff.
For simple e-mails this works fine, but with some emails I get the error:
An error occurs while processing the XPath expression; the expression is ora:getAttachmentProperty('Content-Type', 'ReceiveMessage_ReceiveNotification_InputVariable','body', '/ns2:message/ns2:attachment[$AttachmentId]'). XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is ora:getAttachmentProperty('Content-Type', 'ReceiveMessage_ReceiveNotification_InputVariable','body', '/ns2:message/ns2:attachment[$AttachmentId]').
The XPath expression failed to execute; the reason was: java.lang.RuntimeException: Failed to decode properties string ,att.contentId=1,Content-Type=multipart/related;
boundary "---- _Part_188_790028878.1418047669530",.
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
XPath expression failed to execute
This occurs with both the functions ora:getAttachmentProperty and ora:getAttachmentContent.
It turns out that the attachments that fail are embedded attachments, such as company logos in e-mail signatures. They have a content-type like "multipart/related" or "multipart/alternative", giving a properties string like:

,att.contentId=1,Content-Type=multipart/alternative;boundary="----=_Part_21_137200243.1418648527909",

But 'real' attachments did work, and had a properties string like:

Content-Transfer-Encoding=base64,Content-Disposition=attachment,att.contentId=2,Content-ID=<58BB5695B386104EA778D6E3C982C79D@caci.nl>,Content-Type=image/jpeg; name="DataSources.jpg",
Apparently, if the content-type in the properties string does not start with 'multipart', then the attachment is processable.
When you open the BPEL ReceiveMessage_ReceiveNotification_InputVariable input variable, you get something like:
<part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="body">.
In the SOA Infra database there is a table named 'ATTACHMENTS', with three columns: 'KEY', 'ATTACHMENT' and 'PROPERTIES'.
When you do a select on that table where the key equals the @href attribute, you'll find the attachment. The PROPERTIES column contains the properties string mentioned above.
So I created a DB Adapter configuration on this table, with JNDI name 'eis/DB/SOA' (referring to an already-configured datasource in the DB Adapter). In the for-each loop I first query the attachment using the href attribute.
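As a sketch, the lookup the DB Adapter performs boils down to the following query (the bind variable name is hypothetical; its value comes from the @href attribute):

```sql
-- Fetch one attachment plus its properties string from the SOA Infra schema
SELECT attachment, properties
FROM   attachments
WHERE  key = :attachmentHref   -- value taken from the @href attribute
```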
Then I extract the content-type from the properties string using an expression. In an If activity I use an expression like:

not(starts-with($contentType, 'multipart'))

so that I only process those attachments whose content-type is not multipart.
Probably a more sophisticated expression can be found; I could check against a list of supported mime-types.
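For instance, such a whitelist check might look like the XPath below — the mime-type prefixes listed are just examples of types your process might handle, not an exhaustive or recommended list:

```
starts-with($contentType, 'image/')
or starts-with($contentType, 'application/pdf')
or starts-with($contentType, 'text/plain')
```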
To me this seems like a bug: SOA Suite should be able to get the available properties out of the string in order to process the attachment. The main problem is also that the BPEL fault handler is ignored: when the error occurs, the process fails, so I can't catch the exception and act on it. And honestly: I shouldn't have to query the table myself using the DB Adapter, should I?
By the way: I'm working with 12c (12.1.3), but I assume the same will occur in 11g.
I'll present here three ways to run a query for each row returned by another query. Let's take an example: get the execution plan (select from dbms_xplan.display_cursor) for each of my queries (identified from v$sql). The 90's way was to run a first query that generates the second queries into a spool file, and then execute that file. Here are easier ways, some of them coming from the 12c new features: lateral join and implicit statement results.