Feed aggregator

Enable Process Analytics in BPM12c

Darwin IT - Wed, 2016-03-23 12:07
To be able to use BAM12c together with BPM12c, you'll need to enable process analytics. Only when that is enabled will BAM12c write the sample data to the process cubes/star schema.

To do so you'll need to go to the enterprise manager (eg. http://darlin-vce-db:7001/em). Then open up the System MBean Browser. This can be started from the soa-infra:

And then from SOA Infrastructure -> Administration -> System MBean Browser:


However, you can also start it a little quicker from the Weblogic Domain menu:
In the MBean Browser look for 'Application Defined MBeans':
Then look for 'oracle.as.soainfra.config' -> 'your server' -> AnalyticsConfig -> analytics:

Then in the pane make sure that both 'DisableAnalytics' and 'DisableProcessMetrics' are set to false:


And click 'Apply'.

Above you'll see the 12.2.1 layout, but it works the same in 12.1.3. Restart the SOA Server after that.

I'm not the first one to write about these changes, but I found that you can only update these fields if you have started the BAM server at least once. Apparently the BAM server registers itself, so only after that can you update and apply these attributes.
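If you have to repeat this on several environments, you can script the same change with WLST instead of clicking through the MBean browser. Below is a rough, untested sketch: the credentials, URL, and especially the MBean ObjectName (assumed from the browser path above) are placeholders you need to verify against your own domain.

$MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
from java.lang import Boolean
# Connect to the AdminServer (credentials and URL are placeholders).
connect('weblogic', 'welcome1', 't3://darlin-vce-db:7001')
# Switch to the custom MBean tree and navigate to the analytics config MBean.
custom()
cd('oracle.as.soainfra.config')
# ObjectName pattern assumed from the System MBean Browser path:
cd('oracle.as.soainfra.config:Location=soa_server1,name=analytics,type=AnalyticsConfig,Application=soa-infra')
# Enable analytics and process metrics (remember: start BAM once first).
set('DisableAnalytics', Boolean('false'))
set('DisableProcessMetrics', Boolean('false'))
exit()
EOF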




BAM 12c: Extend Data Objects

Darwin IT - Wed, 2016-03-23 11:42
BAM 12c is a huge improvement over 11g. The best thing, I think, is that it is quite a lot easier to create a dashboard. There are several tutorials on BAM, for instance at the BAM12c site, so I'm not going to explain how to create a dashboard here.

One thing, however, on business queries: the examples mostly start with a BPM process and then query from the Process or Activity Data Object as created on deployment of the particular process. However, often you'll find that you want to filter on a certain date range, for instance processes started less than a day or a week ago, or activities running less than an hour, between an hour and two hours, between two and three hours, or longer. But then you'll find that you can't filter in the Business Queries on a date function. For instance, you can't filter on something like '{process start date} < now() - 7'.

To solve that, you can add extra Calculated Fields that return yes or no, or 1 or 0, if a certain date calculation condition is met. To do so, go to the administration tab of the BAM Composer (eg. http://darlin-vce-db:7006/bam/composer):

Then you can expand the Data Objects, and you'll find that the deployed process resulted in two Data Objects: one for the Activities and one for the Process instances:

By the way, to get those you need to have process analytics enabled. I'll explain that in another blog.

Click for instance on the CustomerSurvey Activity, then on the tab 'Calculated Fields' and then on 'Add Calculated Field':

You need to provide a name without spaces, containing only lowercase or uppercase letters and underscores. Then you can provide a display name that is shown in the designer and in flat queries. The column type can be measure, dimension or attribute, but in this case you'll want attribute, to be able to filter on it. In this case I returned 'J' or 'N' for 'Ja' (Yes) or 'Nee' (No). This is sufficient for filtering. But if you want to count/summarize instances that are running less than one hour, between one and two hours, etc., then you might want to return 1 or 0.

Click on OK and then save:

By clicking on the pencil-icon you can edit the field.

I'll provide some other examples that I found helpful for the activity data object:



Field Name: activity_started_lt_week_ago
Display Name: Activity started less than week ago
Column Type: Attribute
Expression: IF(DATEDIFF(SQL_TSI_DAY,{Activity Start Time},now())<=7)THEN("J")ELSE("N")
Description: Is the activity started at most 7 days ago? (J/N)

Field Name: activity_started_lt_day_ago
Display Name: Activity started less than day ago
Column Type: Attribute
Expression: IF(DATEDIFF(SQL_TSI_HOUR,{Activity Start Time},now())<=24)THEN("J")ELSE("N")
Description: Is the activity started at most 24 hours ago? (J/N)

Field Name: Activiteit_Looptijd_min
Display Name: Activiteit Looptijd (min)
Column Type: Attribute
Expression: IF({Activity Instance Status}=="ACTIVE")THEN(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now()))ELSE(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},{Activity End Time}))
Description: Actual running time of the activity instance. If the instance is active, the result is the difference between the start time and the current time (NOW()); otherwise it is the difference between the start time and the end time. This is apparently different from the predefined running-time field, because of the sampling moments: sometimes the predefined running time is near zero while the instance is still active.

Field Name: Activiteit_Looptijd_lt_1hr
Display Name: Activiteit Looptijd < 1 uur
Column Type: Attribute
Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())<60)THEN(1)ELSE(0)
Description: Is the activity running less than an hour?

Field Name: Activiteit_Looptijd_lt_2hr
Display Name: Activiteit Looptijd < 2 uur
Column Type: Attribute
Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())>=60&&DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())<120)THEN(1)ELSE(0)
Description: Is the activity running more than one but less than two hours?

Field Name: Activiteit_Looptijd_lt_3hr
Display Name: Activiteit Looptijd < 3 uur
Column Type: Attribute
Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())>=120&&DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())<180)THEN(1)ELSE(0)
Description: Is the activity running more than two but less than three hours?

Field Name: Activiteit_Looptijd_gt_max
Display Name: Activiteit Looptijd > max
Column Type: Attribute
Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())>180)THEN(1)ELSE(0)
Description: Is the activity running 3 hours or longer?

Field Name: Activiteit_is_open
Display Name: Activiteit is open?
Column Type: Attribute
Expression: IF({Activity Instance Status}=="ACTIVE")THEN("J")ELSE("N")
Description: Is the activity still open? (J/N)
For the process Data Objects these are a good starting point:
Field Name: Process_Running_Time_Min_attr
Display Name: Process Running Time (Min) Attr
Column Type: Attribute
Expression: {Process Running Time (millisecs)}/60000
Description: Number of minutes a process has been executing. There is a comparable predefined field, but that is of type 'Measure'; you can't use it with analytic functions such as AVG, MIN, MAX, etc.

Field Name: process_started_lt_week_ago
Display Name: Process started less than week ago
Column Type: Attribute
Expression: IF(DATEDIFF(SQL_TSI_DAY,{Process Start Time},now())<=7)THEN("J")ELSE("N")
Description: Is the process instance started at most 7 days ago? (J/N)

Field Name: process_started_lt_day_ago
Display Name: Process started less than day ago
Column Type: Attribute
Expression: IF(DATEDIFF(SQL_TSI_HOUR,{Process Start Time},now())<=24)THEN("J")ELSE("N")
Description: Is the process instance started at most 24 hours ago? (J/N)

Field Name: Process_Looptijd_in_min
Display Name: Process Looptijd (min)
Column Type: Attribute
Expression: IF({Process Instance Status}=="ACTIVE")THEN(DATEDIFF(SQL_TSI_MINUTE,{Process Start Time},now()))ELSE(DATEDIFF(SQL_TSI_MINUTE,{Process Start Time},{Process End Time}))
Description: Actual running time of the process instance. If the instance is active, the result is the difference between the start time and the current time (NOW()); otherwise it is the difference between the start time and the end time. The predefined "Process Running Time" is apparently different, because of the sampling moments: sometimes it is near zero while the instance is still active.

So these help you filter and aggregate on activity and process running times. Sorry for the Dutch names, but I figure you can get the meaning.

The expressions are based on info I got from the user guide. You can find the 12.2.1 user guide over here; the 12.1.3 version can be found here. Look for chapter 13.8 (in the 12.2.1 user guide) or 14.8 (in the 12.1.3 user guide).

pulling X-Auth-Token from login

Pat Shuff - Wed, 2016-03-23 11:00
I am a big scripting guy. I believe in automating as much as possible, having a program do as much as possible, and typing as little as possible. I find it easier to use command lines than drag-and-drop interfaces. I have been struggling with how to script the REST APIs for Oracle Cloud Services and wanted to get some feedback on different ways of doing this. I wanted to script creation of a database for some workshops that I typically give. The first step is creating the storage containers for the database backup.


Most of the information that I got is from an online tutorial around creating storage containers. I basically boiled this information down and customized it a little to script everything.

First, authentication can be obfuscated by hiding the username and password in environment variables. I typically use a Mac, so everything works well in a Terminal window. On Windows 7 I use Cygwin-64, which includes Unix-like commands that are good for scripting. The first step is to hide the username, identity domain, and password in environment variables.

  • export OPASS=password
  • export OUID=username
  • export ODOMAIN=identity_domain
In my case, the identity domain is metcsgse00026. The username is cloud.admin. The password is given to me when I log into the demo.oracle.com system corresponding to this identity domain. What I would type in is
  • export OPASS=password
  • export OUID=cloud.admin
  • export ODOMAIN=metcsgse00026
The first step required is authentication. You need to log into the cloud service using the REST API to generate an X-Auth-Token. This is done with a GET command using the curl command.

curl -v -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" -H "X-Storage-Pass: $OPASS" https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

Note the -v is for verbose and displays everything. If you drop the -v you don't get back the return headers. Passing the -i might be a better option, since the -v echoes the user password and the -i only replies back with the headers that you are interested in.

curl -i -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" -H "X-Storage-Pass: $OPASS" https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

In our example, this returned:

HTTP/1.1 200 OK
date: 1458658839620
X-Auth-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Url: https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026
Content-Length: 0
Server: Oracle-Storage-Cloud-Service

When you take this output and try to strip the X-Auth-Token from the header, the curl progress meter gets mixed in, so you need to add -s to the command (giving -is) to suppress it. Otherwise you get output like:

% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0

If you then add grep "X-Auth-Token" followed by awk '{print $2}', you get back just the AUTH string, which is what we are looking for.

curl -is -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0 | grep -s "X-Auth-Token" | awk '{print $2}'

AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
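To fold all of this into a script, you can capture the token in an environment variable. A small sketch; the tr -d '\r' is an extra guard I've added to strip the carriage return that HTTP headers carry, which would otherwise corrupt the variable:

# Assumes OUID, OPASS, and ODOMAIN are exported as described above.
export OTOKEN=$(curl -is -X GET \
  -H "X-Storage-User: Storage-$ODOMAIN:$OUID" \
  -H "X-Storage-Pass: $OPASS" \
  https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0 \
  | grep "X-Auth-Token" | awk '{print $2}' | tr -d '\r')
echo "Got token: $OTOKEN"

Subsequent REST calls can then pass -H "X-Auth-Token: $OTOKEN" without re-authenticating, at least until the 30-minute token lifetime expires.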

Google as a Cloud Provider?

Tim Hall - Wed, 2016-03-23 06:40

I saw a tweet this morning that pointed me to this article.

Google To Challenge Amazon, Microsoft In Cloud Computing War

 

This comes hot on the heels of this article.

Google dumps ISP email support. Virgin Media takes ball, stomps home

I use a lot of Google services and I like them. Having said that, I just can’t bring myself to take their Google Cloud Platform seriously. It’s not that I don’t believe they have the capability to do cloud. They are Google after all. :) It’s more about trusting that their services will exist in the future. If they are happy to dump 4.6 million email customers in one shot, why should I believe they give a crap about my IaaS stuff?

This kind of behaviour is not new from Google. They have taken an axe to many services before, but this seems so much more dramatic and significant coming from a company that is pushing their public cloud agenda.

Now it all comes down to money, and I guess Google couldn’t make enough off this ISP email customer, but it is still a worrying signal. People should always have an exit strategy for every cloud project, but with Google it seems like it should be a bigger priority.

Maybe I’m just being paranoid. Maybe I’m not. I just feel unnerved.

Cheers

Tim…


GDC 2016 – Part 2: The State of VR

Oracle AppsLab - Wed, 2016-03-23 03:31

VR is big and is going to be really big for the game industry, and you could feel it in the air at GDC 2016. For the first time, GDC added two days of VR development-focused events and sessions, and most of the VR sessions were packed – lines formed even 30 minutes before the sessions, and many people were turned away. The venue for the VR sessions had to be changed to double the capacity for day 2.

There was lots of interest and enthusiasm among game designers, developers and business guys, as VR represents a brand new direction, new category, and new genre for games!

We are still at the dawn of VR games, with hardware, software, content, approaches, etc. starting to come together. Based on what I learned during GDC, I’d like to summarize the state of various aspects of VR development.

1. VR Headset

This is the first thing that comes to mind when we talk about VR, right? After all, the immersive experience is cast into our minds while we cover ourselves with the VR headset. There are a couple of VR headsets available on the market, and a slew of VR headsets debuting very soon.

VR Headset

From the $10 Google Cardboard, to the $100 Samsung Gear VR, to custom rigs over $1,000, VR headset prices span a wide spectrum, and so do capability and performance. Most people who want to get hold of VR will likely choose among the Samsung Gear VR, PlayStation VR, Oculus Rift, and HTC Vive. Here I will do a brief comparison so you have some idea of what you can get.

Samsung Gear VR

It uses specific Samsung phones to show VR content, so the performance is low, limited by the phone hardware, usually at 60fps. It has a built-in touchpad for input, but you may also use an optional gamepad. It has no wire connecting it to a PC, so you can spin around on a chair and not worry about tangling yourself. It has no position tracking.

If you own a Samsung S6/S7 or Edge version, why not get the Gear VR to experience the magic? $99 seems really inexpensive for any new gadget. Even if you have a non-Samsung phone, you can still slip it into the rig and use the Gear VR as an advanced version of the Cardboard viewer. Of course, you will not have the control pad capability.

PlayStation VR

It uses a PS4 to run VR games, so it has real game-grade hardware to run VR content at 120fps, with consistently high performance. For input, it has a gamepad and tracked controllers, which feel like holding a beacon with a light bulb. It has position tracking within a small area.

The unique part of PSVR is that it is meant to be played together with regular gamers watching TV screens, making it a party game in your living room. The person with the PSVR has the immersive feeling in the game, while the others watching the TV can fight against or play along with that person (a game character with a VR headset) in the game. If you have a PS4 at home, shelling out another $399 seems reasonable for a decent VR game experience. But you’ll have to wait until October 2016 to buy one, right before the holiday season.

Oculus Rift

This is expected to be a high-end VR headset, with games running on a powerful Oculus-ready computer. It will have very high performance, showing VR content at 120fps or higher. It has a wire connected to the computer, which limits how far you can spin around in 360 degrees. It also has position tracking within a small area. It does not come cheap at $599, but you can get it pretty much now, in March.

HTC Vive

It is considered even higher spec than the Oculus Rift. It requires a muscular PC, with a motion sensor and motion controllers attached, and it delivers very high performance for VR games. It has tracked hands for input and provides room-scale position tracking, which is above everyone else. For designers and developers, this room-scale tracking capability may give another dimension for experiments.

It costs $799, because it is high-end hardware and bundled with a bunch of bells and whistles. And you can expect to get it in April if you pre-order one now.

Others

HoloLens is another interesting device for VR/AR. Also, rumor has it that Google is building a VR headset too – one much more powerful than its Cardboard viewer.

2. Game Engine for VR

Recent trends indicate that game engine companies are making it easier (or free) for people to access game engine software and develop games on it. There were quite a number of sessions covering detailed topics on specific game engines, but based on my impression, here is the list to try out.

Unity 5.3 by Unity Technologies – It has a free version (Personal Edition) with full features. I believe it is the most popular and most widely used game engine, with cross-platform deployment to the full range of mobile, VR, desktop, Web, console and TV. Also, many of the alt.ctrl.GDC exhibits utilized Unity to create games for their controllers to interact with.

Unreal Engine 4 by Epic Games – It is a sophisticated game engine used to develop some AAA games. They also showcased two VR games, Bullet Train and Showdown. The graphics and visual effects look astonishing.

Lumberyard game engine by Amazon

Lumberyard by Amazon – It is a new entry to the engine game, but it is free with full source, meaning you can tweak the engine if necessary. It would be a good choice if you are developing an online game and don’t want to worry about hosting a robust game backend. I guess that’s where Amazon wants to get a share of the game. It does not support VR yet, but such support will be added very soon.

3. Capture Device

For many VR games, designers and developers will just create the virtual game world using a game engine and other graphical software. But in order to show real-world events inside the VR world, you need a special video camera that can take 360-degree, or spherical, photos and videos.

Spherical Video Capture Device

Well, most of us may not have seen or used this type of camera, me included, so I don’t have an opinion on them. I did use the native camera app on an Android device to capture spherical photos, but it was difficult to take many shots and stitch them together.

Stereoscopic Video Capture Device

A step further is stereoscopic video capture, which takes two photographs of the same object at slightly different angles to produce depth. These are high-end professional rigs, with many custom-built versions. The price can easily go above $10k.

This area is still quite fluid, and I am not sure it will ever go mainstream. I hope some consumer version at a reasonable price becomes available, so we can produce some VR videos too.

4. Convention and Best Practice

With fewer than 100 real VR game titles in total, people in the VR field are still trying to figure things out, and no clear conventions have yet surfaced for designers, developers and players.

In some sessions, VR game designers and developers shared the lessons they learned while producing their first several VR games: interaction patterns, reality trade-offs (representational, experiential, and interaction fidelity), and the fidelity contract in terms of physics rules, affordances, and narrative expectations. Audio (binaural audio) and visual effects also help realize an immersive experience.

As more and more “best practices” converge, together with more research in VR psychology and UX, conventions will emerge to put designers and players on the same page.

5. Area of Use

By far, games are the most natural fit for the VR experience, and the entire game industry is driving toward it. Cinematic VR will be another great fit; as ILM X Lab demonstrated with “Star Wars,” viewers may “attach” to different characters to experience various viewpoints in the movie.

People are also exploring VR as a new way of storytelling in journalism, a new way of exercising for sports (e.g. riding a stationary bike in the gym that feels much like driving a Humvee in a war zone), and a new way of education, e.g. going inside a machine and looking at the inner mechanism of an engine.

VR brings another aspect of artistic expression as new art media, challenges us to advance technology to a new frontier, and at the same time, provides us with great opportunities.

Things are just getting started!

VR Skeptic: Making VR Comfortable with Apple TV

Oracle AppsLab - Wed, 2016-03-23 01:36

We are still in the early days of virtual reality. Just as in the early days of manned flight, this is a time of experimentation.

Current VR experiments resemble early manned flight experiments

What do we wear on our heads? Helmets? Goggles? Contact lenses? Or do we simply walk into a cave or dome or tank? What do we wear or hold in our hands? Game controllers? Wands? Glowing microphones? Bracelets, armbands, and rings? Or do we just flap our arms in the breeze? Do we sit? Stand? Walk on a treadmill? Ride a bike? Or do we wander about bumping into furniture and each other?

As a person who prefers to go through life in a reclining position, most of these options seem like too much bother. I have a hard time imagining how VR could become ubiquitous in the enterprise if employees have to constantly pull on complicated headgear, or tether themselves to some contraption, or fight for access to an expensive VR cave. VR in the workplace must be ergonomic, safe, and easy to use even before you’ve had your morning coffee.

Lately I’ve been enjoying VR content, goggle-free, from the comfort of my lazyboy using an Apple TV app called Littlstar. Instead of craning my head back and forth, I just slide my thumb to and fro on the Apple remote. I can fly through the air and swim with the dolphins without working up a sweat or stepping on a cat.

Selection screen for VR content on the Littlstar Apple TV app

To be clear: watching VR content on TV is NOT real VR. It’s nowhere near as immersive. But the content is the same and the experience is surprisingly good. Navigation is actually better: because it is effortless I am more inclined to keep looking around.

The Apple remote strikes me as the perfect VR controller. It is light as a feather, easy to hold, lets you pan and drag and click and zoom, and you can operate it blindfolded.

Watching VR content on TV also makes it easier to share. Small groups of people can navigate a virtual space together in comfort. One drawback: it’s fun to be the person “driving,” but abrupt movements can make everyone else a tad queasy.

What works in the living room might also work well at a desk – or in a meeting room. TVs are already replacing whiteboards and projection screens in many workplaces. And the central innovation of the fourth-generation Apple TV, the TV app, creates a marketplace to evolve new forms of group interaction. I expect there will be a whole class of enterprise TV apps someday.

For all these reasons, I have been pushing to create Apple TV app counterparts to the VR apps we are starting to build in the AppsLab. TV counterparts could make it easier to show prototypes in design meetings and customer demos. I feel validated by Tawny’s (@iheartthannie) report from GDC that Sony has adopted a similar philosophy.

Screenshot from an early AppsLab Apple TV app

Thanks to one of our talented developers, Os (@vaini11a), we already have one such prototype. It doesn’t do much yet; we are just figuring out how to display desktop screens in a VR environment. With goggles on I can use the VR app to spin from screen to screen in my office chair and look down at my feet to change settings. With the Apple TV counterpart app, I can do exactly the same thing without moving anything other than my thumb.

It’s still too early to predict how ubiquitous VR might become in the workplace or how we will interact with it. But TV apps, or something like them, may become one way to view virtual worlds in comfort.

5 Reasons to Register for Modern Sales Experience

Linda Fishman Hoyle - Tue, 2016-03-22 23:09

A Guest Post by Michael Richter, Oracle Product Management

Spring is in the air, which means a few things—jacket weather, Spring cleaning, and conference season. While you are contemplating the conferences you’ll attend, we encourage you to consider an event that will tidy up your sales strategies for the year to come: The Modern Sales Experience.  Here are five reasons to attend the Modern Sales Experience in Las Vegas, April 26 – 28, 2016.

1. Sales Thought Leadership and Inspiring Speakers

The Modern Sales Experience features more than 25 sessions led by world-class industry analysts, successful global companies and Oracle thought leaders. Take a look at the conference program to search for an area of interest.

The conference is full of leading minds in digital and business. The Conference Keynote will be delivered by Siva Sundaresan, Group VP of Oracle Sales Applications on Tuesday at 3 p.m. in the MGM Grand Conference Center. On Wednesday at 9 a.m., don’t miss the Modern Experience Keynote presented by Mark Hurd, Oracle CEO, at the MGM Garden Arena. This is followed by the Sales Keynote with Ray Wang, Chairman and Founder of Constellation Research, at 11 a.m. in the MGM Grand Conference Center.

Additional general sessions and speakers for the conference include:

“Human Factors: Unleashing the Value of Technology” by Rebecca Wettemann, Vice President of Research, Nucleus Research

“The Quest for Optimal Sales Performance” by Mark Smith, CEO & EVP of Research, Ventana Research

“Revolutionizing Partner Engagement” by Jerri Paul, Business Architect Director Federal Business, Dell, Inc. & David Claflin, IT Management, Dell, Inc.

“Masterminding Your Technology Landscape” by Rhiannon Ainge, Business Systems & Marketing Manager, Panasonic Computer Products Europe

“The Future of Customer Experience and Digitalization at Siemens” by Peter Gapp, Vice President CRM IT Transformation, Siemens A.G.

“The Future of Work” by UNLV Lee Business School Student Panel, moderated by Rebecca Wettemann, Nucleus Research

2. Product Experts

There are many ways to interact with partners and Oracle product experts to gain deeper insights and more intimate live discussions. You will find a number of smaller sessions focusing on Configure, Price, and Quote (CPQ) Cloud and Sales Cloud topics. In addition to customer-led sessions, Oracle product managers will lead roadmap and other product sessions. There’s also a CPQ Tech Track for Technical Administrators.

Sales session highlights:

  • Industry Solution Showcase for Sales, Scott Creighton, VP Oracle CX Cloud Industry Solutions, Oracle
  • Securing Data between Cloud & On-Premises for Optimal CX Solutions: David Canellos, SVP Advanced Technology, Blue Coat Systems

CPQ session highlights:

  • Opportunity to Cash: Fully Using CPQ in your Business: Jim Toporski, IT Manager & Caroline Gillis, CPQ Senior Manager, Panduit
  • How OneNeck used CPQ to support its Acquisition Strategy: John Hein, Director, Solution Sales Architecture, OneNeck
  • How FANUC America Simplified a Complex Product Quoting Process: Zach Heuer, GM, FANUC America
  • CPQ Tech Track for Technical Administrators

Birds of a Feather: You have the opportunity to meet face to face with Oracle product experts. Choose from five topics of interest at designated tables in the breakfast and lunch areas for the Birds of a Feather program. This is located in the Solution Center (MGM Grand Marquee Ballroom).

Theater Presentations in the Solutions Center: The Solution Center is where you will find Oracle Sales Consultants and partners. Visit a demonstration kiosk or a presentation at the Sales Solutions Showcase Theater. Find a topic of interest whether it’s sales force automation, sales performance management, partner relationship management, customer data management, pre-configured industry solutions, or sales analytics. It’s all there for the taking.

Some of the most important conversations happen in hallways and some of the most valuable relationships are built face to face. Modern Sales Experience has networking breaks designed to give you the chance to network with other Oracle Sales Cloud customers and Oracle leadership. Not to mention the Customer Appreciation Event at the MGM Grand Arena, but we’ll get into that later.

3. A Unified and Seamless CX Approach

In addition to sales sessions, you are encouraged to attend cross-experience sessions. These sessions will reveal how every experience with one’s brand matters and why it’s critical to strive for a unified and seamless CX approach. Sessions will cover topics ranging from industry solutions, CX Platform, CX suites and integrations, the CX Marketplace, social CRM, CX Cloud for Midsize, and Cloud user experience.

These sessions hit on the hottest trends and challenges customer experience professionals are facing in 2016. Highlights include a session by social CRM influencer Paul Greenberg on “Social: the Secret Weapon for Customer Experience Success.” Also, Oracle Application and User Experience GVP Jeremy Ashley will lead a session focused on “Cloud User Experience: The Way Forward.” You can take a look at the full program and start planning your schedule now.

4. The Las Vegas Experience – Networking and Entertainment―Need we say more?

Cirque du Soleil themed Dinner & Party: It’s “Fire” at the Ling Ling Club at Hakkasan Nightclub in the MGM Grand for all Sales conference attendees. We recommend preparing for an amazing experience surrounded by performing artists, music, and great food on Tuesday from 7 – 9 p.m.

5. A Customer Appreciation Event for the Books

Because it wouldn’t be Vegas without a night of world-class entertainment, we’re excited to have booked Goo Goo Dolls to perform a private concert for attendees. Goo Goo Dolls will play in the MGM Grand Arena on Wednesday evening at 10:30 p.m. The Oracle Customer Appreciation Event is our way to say thanks—we want everyone to enjoy the night of food, drinks, and amazing entertainment.

Interested? See what attendees are already saying about Modern Sales Experience by joining the conversation on social. Use the hashtag #SalesX16.

If you are planning meetings with Oracle Partners while onsite, you can find the list of sponsors and exhibitors here. Thank you to our Silver Sponsor, Config Consultants!

Team OTN at CloudWorld DC, CloudWorld Developer Mumbai and Collaborate!

OTN TechBlog - Tue, 2016-03-22 13:37
Team OTN wanted to make sure you stop by to say hi at the next few events we'll be attending. Look for us to help you get set up with an Oracle Cloud Trial Account, watch our community managers conduct interviews with community experts, learn more about community.oracle.com and get the latest OTN T-shirt. Register for one or all of the events listed below today!

CloudWorld OTN Developer Lab, DC - March 24th
    •    Provision a new database in the cloud with Oracle Database Cloud Service (DBCS)
    •    Gain experience in building new applications for the Oracle Cloud
    •    Set up connectivity between the Compute tier and DBCS
    •    Learn about the REST APIs available to access the Oracle Cloud

CloudWorld Developer, Mumbai - April 8th

Join us to learn how you can leverage Oracle Cloud Platform technologies for the complete development lifecycle! Explore Oracle PaaS that enables you to deliver better quality code with the agility you need to meet today's IT challenges.

Collaborate 2016, Las Vegas - April 10th to 14th

This conference empowers users of Oracle business applications and database software to gain greater value from their Oracle investments through real-world education and networking. Don't miss the IOUG Pre-Conference Workshop: Cloud Attack, facilitated by some of our Oracle ACEs.

A Tale of Three Cities: Perspectives on innovation from New York, San Francisco and Sydney

Pythian Group - Tue, 2016-03-22 12:29

Recently, Pythian hosted a number of Velocity of Innovation (Velocity) events. I moderated two of these: one last June in New York, and one in November in San Francisco. Another event in Sydney, Australia was moderated by Tom McCann, senior customer experience analyst with Forrester.

Our Velocity events have given us unique insights into what IT professionals in various regions see as their top priorities or concerns. And although we always framed our discussions with similar questions, it was interesting to see the different directions they took in each location — especially when it came to the topic of innovation.

So what makes a particular region fertile ground for innovation? And can you measure it?

The Global Innovation Index (GII) ranks countries based on a multitude of indicators of innovation. The United States ranks number 2 on the GII, behind Switzerland, while Australia is number 17, out of 141 countries. According to the GII website, the index aims to capture the multi-dimensional facets of innovation and provide the tools to assist in tailoring policies to promote long-term output growth, improved productivity and job growth.

The ideas discussed at the US and Australian locations seemed to align with the GII results, with US panelists expressing more positive attitudes and concrete ideas on how companies can improve agility and become more innovative. And while Australia is at the forefront of technology in the Asia-Pacific region, the Australian panelists and audience members described more cautious approaches to achieving innovation.

Sydney: Cautiously moving forward

Early in the Sydney panel discussion, Chris Mendez, executive consultant for big data and analytics at Industrie IT, sparked a lively discussion about innovation by asserting that innovation is lacking in that region.

“I actually don’t think there’s enough innovation in Australia, in particular. There’s a lot of talk about it, people are doing a lot of experiments, and there are some companies who’ve set up business purely based on tool sets that use data to innovate. But there are a few things that seem to be working against innovation, and I think one of those things is that it doesn’t stand on its own,” Mendez said.

According to Francisco Alvarez, vice president, APAC at Pythian, the risks associated with innovation might be holding companies back in Australia. “The main problem for most companies is that innovation equals risk,” Alvarez said.

Alvarez also commented on what it takes to make innovation work. “If you take a step back and look at the companies that are doing well in the market, you can see that there is one factor that differentiates them: they were not afraid to try to innovate. And because of that innovation they are getting their share of the market and gaining ground. Just look at the financial market. CBA was considered crazy a few years ago for all the investment they were making in technology, social media, apps and so on. They got ahead. And now everybody is trying to do the same,” he said.

Mendez thinks that innovation needs to start from the top. “I think there’s also a very big misunderstanding at board levels about innovation because boards are there to actually stop you changing your business. The fundamental tenet is: ‘We’ve got a great business model here, it’s running well, we’ve got to make sure that any change to it doesn’t damage that.’ There’s a natural caution at board levels and it’s totally understandable,” he said.

While cautious, the Sydney panelists expressed that they thought there is hope for more innovation in the future. They expressed a need to proceed slowly, watching what works for innovation leaders.

“The key is to have a balance,” Alvarez said.

New York: Early adopters

If you were to put our New York panelists on Geoffrey Moore’s Technology Adoption Lifecycle (https://en.wikipedia.org/wiki/Geoffrey_Moore), you might classify them as early adopters, rather than true innovators. Not surprising, since New York’s competitive industries such as banking and publishing rely on innovative technologies, but they don’t create them.

According to New York panelist, Forrester Analyst Gene Leganza, what makes an enterprise agile is the ability to sense what’s going on in the marketplace and to quickly respond to it. But, he said that innovation comes at a cost. “The flip side of agility is innovation. An interesting aspect of innovation is getting really hot talent into your environment. Getting the right talent and doing smart things and being leading edge are challenges. You have to figure out what level to drop in on, where you are in the industry. You need to determine if you are a startup or a state organization that needs to be a fast follower,” Leganza said.

Otto Toth, CTO at Huffington Post, warned that innovating quickly is not always in the best interest of the business, or may not be the way to do it properly. He asserted that quick innovation can actually work against the business, and that instead of making your business faster, being very agile can slow everything down.

“Too many decision-makers just slow down the process. It’s better to have a few people or a core team who make the decisions and come up with new features,” he added.

Leganza went on to describe what it takes at various levels of the organization. He said that there’s a notion at the engineer level that agility means bureaucracy won’t get in their way. Then there’s agility at the enterprise level, which is about reducing risk and understanding how soon change can be in production.

“The higher up you go, the more people are going to be receptive to what improves the whole portfolio rather than one project. This is where architects come in. They have been hands-on, but have the credibility and knowledge to guide the organization more strategically,” Leganza said.

San Francisco: The innovators

In San Francisco the narratives on innovation were quite different. Although cities don’t have their own GII ranking, you might assume that the West Coast IT leaders are the innovators. And judging by the discussion at the San Francisco event, this assumption seemed to be true.

Cory Isaacson, CTO at RMS, was one of our San Francisco panelists. His company runs catastrophe models for some of the world’s largest insurance companies: scenarios that tell them what a disaster like an earthquake or hurricane might cost. Isaacson has been working on bringing big data and scalable systems together to create a new cloud-based platform.

“At my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. But when you’re doing innovation, it is risky. There’s no way around it. There is a lot to evaluate: from different algorithms to the risk models and the catastrophe models,” said Isaacson.

Sean Rich, director of IT at Mozilla added to the San Francisco discussion by talking about some of the concrete innovations his company is working on. They’re taking a partnership approach to enable agility.

“Innovation is doing something new. In an effort toward achieving agility, one of the things that we’re doing is enabling the agility of our business partners, by changing our own operating model. Instead of traditional IT where we run all the services and infrastructure necessary to drive the business, we’re taking more of an enabler or partnership approach,” Rich said.

“We’re now doing things like encouraging shadow IT, encouraging the use of SaaS applications, and helping them really do that better through different service offerings like vendor management or change management of user adoption for certain platforms and data integration,” he added.

“Overall, we’re looking at ourselves differently, and asking what new capabilities we need to develop, and what processes, tools and skills we need to enable agility for our marketing group or our product lines, as an example,” Rich said.

Aaron Lee, the Chief Data Officer at Pythian, runs a team that specializes in helping clients harness technology to deliver real outcomes. Usually they involve things like big data, DevOps, cloud, advanced analytics — he’s involved in some of the most leading edge initiatives for Pythian customers. He takes a practical approach to innovation with clients, and said that companies could improve innovation by looking at the root of the motivation for it.

“They need to ask: Why are we going down this path, trying to innovate something and what is the value of that thing we’re trying to innovate?

“If the shared goals around innovation opportunities aren’t defined in a way that actually lead to success over time, then the business is just like any other organism: it starts to get more risk averse. Then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criteria that we put in place to help ourselves and our customers get to a good place,” Lee said.

Isaacson added that companies like Google have been known to allow an engineer to take a day a week or a day every two weeks to just look at things. “I think though, the challenge is you have to get your organization up to the point where this is an economically viable thing to do. Once we get more ahead of the curve, I think we could do that kind of thing,” he said.

Interested in being a part of discussions like these? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com. To view our schedule of upcoming events visit our Velocity of Innovation page.

Categories: DBA Blogs

accessing oracle cloud storage from command line

Pat Shuff - Tue, 2016-03-22 11:00

Now that we have the cost and use out of the way, let's talk about how to consume these services. Unfortunately, consuming raw blocks, either tape or spinning disk, is difficult in the cloud. Amazon offers you an S3 interface and exposes the cloud services as an iSCSI interface through a downloadable object, or via REST API services. Azure offers something similar with REST API services, but offers SMB downloadable objects to access the cloud storage. Oracle offers REST API services, but offers NFS downloadable objects to access the cloud storage. Let's look at three different ways of consuming the Oracle Cloud services.

The first way is to use the REST API. You can consume the services by accessing the client libraries using Postman from Chrome or RESTClient from Firefox. You can also access the service from the curl command line.

curl -v -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0

In this example we are connecting to the identity domain metcsgse00026. The username that we are using is cloud.admin. We store the password in an environment variable OPASS and pull it in when we execute the curl command. On Linux or a Mac, this is done with the pre-installed curl command. On Windows we had to install Cygwin-64 to get the curl command working. When we execute this curl command we get back an AUTH token header that can be passed to the cloud service to create and consume storage services. In our example above we received back X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008, which is valid for 30 minutes. The next step is to create a storage container:

curl -v -s -X PUT -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer

This will create myFirstContainer and allow us to store data either with more REST API commands or with tools like CloudBerry or NFS. More information about how to use the REST API services can be found in an online tutorial.
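With the token in hand, uploading an object into the new container is one more PUT. A small sketch, assuming the Swift-style object API behind this service; the local file name is illustrative and the token differs per session:

# Upload a local file as an object in myFirstContainer.
curl -s -X PUT -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" \
  -T backup.dmp \
  https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer/backup.dmp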

The second way of accessing the storage services is through a program that takes file requests on Windows and translates them into REST API commands against the cloud storage. CloudBerry has an explorer that allows us to do this. New accounts are set up via the File -> Edit or New Accounts menu item, where you fill out the access details. Note that the username is a combination of the identity domain (metcsgse00026) and the username (cloud.admin). We could do something similar with Postman or RESTClient extensions to browsers. Internet Explorer does not have plug-ins that allow for REST API calls.

The third, and final, way to access the storage services is through NFS. Unfortunately, Windows does not offer NFS client software on desktop machines, so it is a little difficult to show this as a consumable service. Mac and Linux can mount an NFS server as a network mount. Oracle currently does not offer SMB file shares to their cloud services, but it is on the roadmap for the future. We will not dive deep into the Oracle Storage Cloud Appliance in this blog because it gets a little complex, with setting up a VM and installing the appliance software. The documentation for this service is a good place to start.
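Once the appliance is installed and configured, consuming it from Linux is an ordinary NFS mount. A sketch with placeholder host and export names, which will depend entirely on your appliance configuration:

# Host name and export path are placeholders for your own appliance setup.
sudo mkdir -p /mnt/oraclecloud
sudo mount -t nfs storage-appliance:/export/myFirstContainer /mnt/oraclecloud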

In summary, there are a variety of ways to consume storage services from Oracle. They are typically program interfaces and not file interfaces. The service is cost-advantageous when compared to purchasing spinning disks from companies like Oracle, NetApp, or EMC. Using the storage appliance gets rid of the latency issues you typically face and the difficulty of accessing data from a user perspective. Overall, this service provides higher reliability than on-premise storage, lower cost, and less administration overhead.

I Am Speaking at OTN Yathra 2016

Oracle in Action - Tue, 2016-03-22 09:19


The Oracle ACE directors and Oracle volunteers in the region are organizing their third evangelist event, called 'OTNYathra 2016', from 23rd April 2016 to 1st May 2016. This yathra, or tour, will be a series of 6 conferences across 6 major cities (Chennai, Bangalore, Hyderabad, Pune, Mumbai and Delhi), managed by ACE directors and Oracle volunteers of the region.

I will be speaking at this year's OTNYathra about the Oracle Database 12c new feature: Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), and this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP and hence its corresponding NFS export(s) will automatically fail over to one of the surviving nodes so that the NFS client continues to receive uninterrupted service of NFS exported paths.
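To give a flavor of how this is wired up, the HAVIPs and exports are registered as Clusterware resources through srvctl, roughly along these lines. This is a sketch only: the resource ids, address, and path are illustrative, so check the 12c srvctl reference for the exact options.

# Register a highly available VIP and an NFS export over an ACFS file system.
srvctl add havip -id havip1 -address hanfs-vip.example.com
srvctl add exportfs -id havip1 -path /u01/app/acfs/share -name share1
srvctl start havip -id havip1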

My session will be held on Sunday 1st May, 2016   from 3:00pm to 3:50pm in
Room 1, BirlaSoft, H–9, Sector 63, NOIDA – 201306, NCR Delhi
Hope to meet you there!!




Categories: DBA Blogs

TekTalk Webinar: 3 Immediate Use Cases for Oracle PaaS

WebCenter Team - Tue, 2016-03-22 08:09


3 Immediate Use Cases for Oracle Platform as a Service
Human Resources * Legal * Field Services
Thursday, March 24, 2016 | 1 PM EST / 10 AM PST

Oracle Cloud Platform meets the unique needs of developers, IT professionals, and business users with a comprehensive, integrated portfolio of platform services that enables them to innovate faster, increase productivity, and lower costs. Customers can use Oracle Cloud Platform to integrate existing IT with next-generation cloud services, accelerate application development and deployment, and lead business transformation.

Innovate Quickly and Confidently with Platform as a Service.

Platform as a Service solutions can help you:
  • Accelerate business innovation
  • Improve business agility and insight
  • Reduce IT cost and complexity
  • Increase productivity and collaboration
During this 30 minute webinar, Troy Allen will provide a demonstration of how TekStream Solutions uses Oracle Cloud Platform to address specific business needs for modern organizations. We'll also present specific use cases for Human Resources, Legal and Field Services teams. Register today!

Unable to logon to BPM Workspace

Darwin IT - Tue, 2016-03-22 04:30
Yesterday I tried to test a demo BPM process with a few tasks, but I couldn't log on to the workspace. I couldn't find an error, except for:

<[ServletContext@452818297[app:OracleBPMWorkspace module:/bpm/workspace path:null spec-version:3.1]] Servlet failed with an Exception java.lang.IllegalStateException: Response already committed

I tried several possible solutions, like setting the listen address, but that did not work. What solved the issue was setting the ServerURL in the System MBean Browser of the soa-infra.
To do so in BPM12cR2, go to the Enterprise Manager (eg. http://darlin-vce-db:7001/em as it is in my VM) and go to the soa-infra:
Then in the SOA Infrastructure -> Administration menu open the System MBean Browser:
In the System MBean Browser look for the Application Defined MBeans and expand it:
Within the Application Defined MBeans, expand 'oracle.as.soainfra.config', then your server  and then 'SoaInfraConfig' and click on soa-infra:
Find the attribute ServerURL and edit it to the host:port of your soa-server, including the 'http://' protocol, eg. 'http://darlin-vce-db:7005':

Don't forget to hit the enter key and click on Apply:


Restart your server and it should be good to go.
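If you prefer scripting, the same attribute can be set with WLST. Again a rough, untested sketch: the credentials, URL, and the MBean ObjectName (assumed from the browser path above) are placeholders to verify against your own domain.

$MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
# Credentials and URL are placeholders for your own domain.
connect('weblogic', 'welcome1', 't3://darlin-vce-db:7001')
# Navigate the custom MBean tree to the SoaInfraConfig MBean.
custom()
cd('oracle.as.soainfra.config')
cd('oracle.as.soainfra.config:Location=soa_server1,name=soa-infra,type=SoaInfraConfig,Application=soa-infra')
# ServerURL is a plain string attribute.
set('ServerURL', 'http://darlin-vce-db:7005')
exit()
EOF

As with the manual change, restart the SOA server afterwards.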

GDC 2016 – Part 1: Event and Impression

Oracle AppsLab - Tue, 2016-03-22 03:34

Tawny (@iheartthannie) and I attended the 30th edition of GDC, the Game Developers Conference. As shown in Tawny's daily posts, there were lots of fun events, engaging demos, and interesting sessions; we simply could not cover them all. With 10 to 30 sessions going on in any time slot, I wished for multiple “virtual mes” to attend some of them simultaneously. However, with only one “real me,” I still managed to attend a large number of sessions, mostly 30-minute sessions, to cover more topics at a faster pace.

Game Developers Conference 2016

GDC 2016

Unlike Tawny’s posts that give you in-depth looks into many of the sessions, I will try to summarize the information and take-aways in two posts: Part 1 – Event and Impression; Part 2 – The State of VR. This post will cover event overview and general impression.

1. Flash Backward

Flash Backward – 30 Years of Making Games

After two days of VR sessions, this flashback kicked off the game portion of GDC with a sense of nostalgia, flashing back to games like Pac-Man and Minesweeper, evolving into console games, massive multiplayer games, social games (FarmVille), mobile games (Angry Birds), and on to VR games.

GDC has been running for 30 years, and many of the attendees were not even born back then. The Flashback started with Chris Crawford, the founder of GDC, and concluded with Palmer Luckey, the Oculus dude, who is 23, with not much to flash back on, only looking forward to the new generation of games in VR. He will be back in 20 years for the retrospective.

Video : The MERGE Statement

Tim Hall - Tue, 2016-03-22 02:28

After what seems like an eternity of being ill and having a dodgy throat, followed quickly by a couple of conferences, I've finally got back on the horse and recorded another video.

I was explaining a specific aspect of the MERGE statement to one of my colleagues and while I was doing it I was thinking, “Have I done a video on MERGE yet?” Now I have.

The cameo for this video is Cary Millsap. If you watch the out-takes at the end you will see the level of respect and trust I have garnered in the community. The words confused and suspicious spring to mind! :)

An honourable mention goes out to James Morle for videobombing. :)

Cheers

Tim…


IBM Bluemix NodeRed Demo using Twitter and Telstra SMS API

Pas Apicella - Mon, 2016-03-21 21:48
In this example I integrate a Twitter feed with the Telstra SMS API to send an SMS based on a matching tweet. It is based on the wrapper application which exposes the Telstra SMS API on Bluemix, as per a previous post.

http://theblasfrompas.blogspot.com.au/2015/08/integrating-telstra-public-sms-api-into.html

It is assumed you have a NodeRed NodeJS application running on Bluemix and are at the editor as shown below.



Steps

1. Drag a "Social -> Twitter" node onto the editor
2. Double click on the node and ensure you add your Twitter credentials and authorize Twitter to work with NodeRed itself. Also fill in the tag you wish to receive as part of the feed from Twitter; in this demo it is "#telstrasmaapi-pas"



3. Once done it will look as follows



4. Drag a "Function -> HTTP Request" onto the editor
5. Double click on the HTTP Request item and add details as shown below.

Method = POST
URL = http://pas-telstrasmsapi.mybluemix.net/telstrasms?to=0411151350&body=tweet sent about telstra SMS API
Name = Any name of your choice

Note: Ensure the URL is changed to the mobile number you wish to use and the BODY you wish to send as part of the text. (A curl test of this endpoint is shown after these steps.)


6. Connect the twitter node to the HTTP Request node as shown below.



7. Click the "Deploy" button
8. Now log into your Twitter account and send a tweet using the tag you identified above, as shown below. You must use the tag name you said you're looking for, in this case "#telstrasmaapi-pas"


9. It should then send an SMS to the mobile number you specified (Australian mobiles only) as shown below.
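If the SMS doesn't arrive, you can verify the wrapper endpoint outside NodeRed with a curl call that mirrors the HTTP Request node's configuration above (the number and body are the demo values; the body is URL-encoded here since we're calling it by hand):

curl -X POST "http://pas-telstrasmsapi.mybluemix.net/telstrasms?to=0411151350&body=tweet%20sent%20about%20telstra%20SMS%20API"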



More Information

For more information on NodeRed use the link below.

http://nodered.org/



Categories: Fusion Middleware

Calling REST Services from Application Builder Cloud Service

Shay Shmeltzer - Mon, 2016-03-21 16:33

One of the frequent requests we get when we demo ABCS is - can I invoke some external functionality that is exposed as a REST service and pass parameters to it.

Well, with a minimal amount of JavaScript coding you can do it in the current version. 

I recorded the demo below that shows you how to do that.

I'm leveraging a public REST API that github exposes to get a list of repositories for a user. The service is available at https://api.github.com/users/oracle/repos
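If you want to see the JSON shape the page will consume, you can hit that endpoint directly first (jq is optional, purely for readable output):

# List the repository names for the 'oracle' user from the public GitHub API.
curl -s https://api.github.com/users/oracle/repos | jq '.[].name'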

I then design an ABCS page that has a parameter field, a button that invokes the REST/JSON call, and a placeholder for results. It looks like this: 

In addition the video also shows some other techniques that are useful, including:

  • How to create a new blank data entry page
  • How to add custom component that renders HTML content
  • How to add a button that calls a REST service
  • How to pass a parameter to the JavaScript custom code
  • How to set a page to be the default page of the app
  • How to stage your application for external testing


It seems that right now you are restricted to accessing REST services that are secured over HTTPS protocol (which is a good thing).

Note that you of course don't have to stage the app to see it run; you can just go into live mode or run it to see it working. I just wanted to make sure I have a demo out there that shows how staging works.

The JavaScript snippet I'm using in the video is shown below. Note that the username variable here is a stand-in for the page parameter passed into the custom code; the exact name depends on how you wired the parameter in ABCS.

    // Fetch the repository list for the given user and append each
    // repository name to the myOutput field on the page.
    $.getJSON("https://api.github.com/users/" + username + "/repos", function (result) {
      $.each(result, function (i, field) {
        $('[name="myOutput"]').append(field.name + " ");
      });
    });
    resolve();

If you add

    $('[name="myOutput"]').empty();

as the first line, it will clear the output field for you each time you press the button.

Categories: Development

New Content on Our Oracle.com Page

Oracle AppsLab - Mon, 2016-03-21 16:18

Back in September, our little team got a big boost when we launched official content under the official Oracle.com banner.

I’ve been doing this job for various organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the Oracle.com world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.

Today, I’m excited to launch new content in that space, which, for the record is here:

www.oracle.com/webfolder/ux/applications/successStories/emergingTech.html

We have a friendly, short URL too:

tinyurl.com/appslab

The new content focuses on the methodologies we use for research, design and development. So you can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.

Sometimes, there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iterations and practice.

All-in-all, I’m very happy with the content, and I hope you find it interesting.

Not randomly, here are pictures of Noel (@noelportugal) showing the Smart Office in Australia last month.


The IoT Smart Office just happens to be the first project we undertook as an expanded team in late 2014, and we’re all very pleased with the results of our blended research, design and development team.

I hope you agree.

Big thanks to the writers, Ben, John, Julia, Mark (@mvilroxk) and Thao (@thaobnguyen) and to Kathy (@klbmiedema) and Sarahi (@sarahimireles) for editing and posting the content.

In the coming months, we’ll be adding more content to that space, so stay tuned.

Apache Cassandra 2.1 Incremental Repair

Pythian Group - Mon, 2016-03-21 15:05

The “incremental repair” feature has been around since Cassandra 2.1. Conceptually the idea behind incremental repair is straightforward, but it can get complicated in practice. The official DataStax document describes the procedure for migrating to incremental repair, but in my opinion it doesn’t give the full picture. This post aims to fill that gap by summarizing and consolidating the information on Cassandra incremental repair.

Note: this post assumes the reader has a basic understanding of Apache Cassandra, especially the “repair” concept within Cassandra.

 

1. Introduction

The idea of incremental repair is to mark SSTables that are already repaired with a flag (a timestamp called repairedAt indicating when they were repaired), so that when the next repair operation begins, only previously unrepaired SSTables are scanned for repair. The goal of “incremental repair” is two-fold:

1) It reduces the major expense of a repair operation, which otherwise has to calculate a Merkle tree over all SSTables of a node;

2) It makes repair network-efficient, because only rows that are marked as “inconsistent” are sent across the network.

2. Impact on Compaction

“Incremental repair” relies on an operation called anticompaction to fulfill its purpose. Basically, anticompaction means splitting an SSTable into two: one containing repaired data and the other containing unrepaired data. With the separation of the two sets of SSTables, the compaction strategy used by Cassandra also needs to be adjusted accordingly, because we cannot merge/compact a repaired SSTable together with an unrepaired one; otherwise we would lose the repaired state.

Please note that when an SSTable is fully covered by a repaired range, no anticompaction will occur; Cassandra will just rewrite the repairedAt field in the SSTable metadata.

The SizeTiered compaction strategy takes a simple approach: size-tiered compaction is executed independently on the two sets of SSTables (repaired and unrepaired) that result from the incremental repair anticompaction operation.

For the Leveled compaction strategy, leveled compaction is executed as usual on the repaired set of SSTables, while SizeTiered compaction is executed on the unrepaired set.

The DateTiered compaction strategy should NOT be used with “incremental repair”.

3. Migrating to Incremental Repair

By default, “nodetool repair” in Cassandra 2.1 does a full, sequential repair. We can use “nodetool repair” with the “-inc” option to enable incremental repair, as sketched below.
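
A hedged example of the two invocations (flags as of Cassandra 2.1; check nodetool help repair on your version). Because sequential and incremental repair do not work together in 2.1 (see section 4 below), incremental repair is typically run in parallel mode:

             nodetool repair              # default: full, sequential repair
             nodetool repair -par -inc    # parallel, incremental repair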

For the Leveled compaction strategy, incremental repair actually changes the compaction strategy to SizeTiered for unrepaired SSTables. If a nodetool repair is executed for the first time on Leveled compaction, it will do SizeTiered compaction on all SSTables, because until the first incremental repair is done Cassandra doesn’t know their repaired states. This is a very expensive operation, so it is recommended to migrate to incremental repair one node at a time, using the following procedure (a command-level sketch follows the list):

  1. Disable compaction on the node using nodetool disableautocompaction
  2. Run the default full, sequential repair.
  3. Stop the node.
  4. Use the tool sstablerepairedset to mark as repaired all the SSTables that were created before you disabled compaction.
  5. Restart Cassandra.
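
For example, on one node the migration might look like this (a sketch assuming a service-managed install; the sstable-list file, naming the data files of the SSTables that existed before compaction was disabled, is hypothetical):

             nodetool disableautocompaction                       # step 1
             nodetool repair                                      # step 2: full, sequential repair
             sudo service cassandra stop                          # step 3
             sstablerepairedset --is-repaired -f sstable-list     # step 4 (node stopped)
             sudo service cassandra start                         # step 5
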
3.1 Tools for managing SSTable repaired/unrepaired state

Cassandra offers two utilities for SSTable repaired/unrepaired state management:

  • sstablemetadata is used to check repaired/unrepaired state of an SSTable. The syntax is as below:

             sstablemetadata <sstable filenames>

  • sstablerepairedset is used to manually mark if an SSTable is repaired or unrepaired. The syntax is as below. Note that this tool has to be used when Cassandra is stopped.

             sstablerepairedset [--is-repaired | --is-unrepaired] [-f <sstable-list> | <sstables>]

Please note that with the sstablerepairedset utility, you can also stop using incremental repair on Leveled compaction and restore the data to be leveled again with the “--is-unrepaired” option. Similarly, the node needs to be stopped first.
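
For example (a sketch; the data file path is hypothetical, and the node must be stopped before running sstablerepairedset):

             # check the state; the output includes the repairedAt timestamp (0 = unrepaired)
             sstablemetadata /var/lib/cassandra/data/ks/tbl/ks-tbl-ka-1-Data.db

             # revert an SSTable to unrepaired so leveled compaction can rebuild its levels
             sstablerepairedset --is-unrepaired /var/lib/cassandra/data/ks/tbl/ks-tbl-ka-1-Data.db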

4. Other Considerations with Incremental Repair

There are some other things to consider when using incremental repair.

  • For Leveled compaction, once incremental repair is used, it should be used continuously; otherwise only SizeTiered compaction will be executed. It is recommended to run incremental repair daily and full repairs weekly to monthly.
  • Recovering from missing data or corrupted SSTables requires a non-incremental full repair.
  • The “nodetool repair” --local option should only be used with full repair, not with incremental repair.
  • In C* 2.1, sequential repair and incremental repair do NOT work together.
  • With an SSTable’s repaired state being tracked via its metadata, some Cassandra tools can impact that state:
    1. Bulk loading will make loaded SSTables unrepaired, even if they were repaired in a different cluster.
    2. If scrubbing causes dropped rows, new SSTables will be marked as unrepaired. Otherwise, SSTables keep their original repaired state.
Categories: DBA Blogs

accessing cloud storage

Pat Shuff - Mon, 2016-03-21 15:02
Oracle cloud storage is not the first product that performs basic block storage in the cloud, and the name is a little confusing as well. When you think of cloud storage, the first thing that comes to mind is Dropbox, Box.com, Google Docs, or some other file storage service. Oracle Cloud Storage is a different kind of storage: it is more like Amazon S3 and less like file storage, in that it provides the storage foundation for other services like compute, backup, or database. If you are looking for file storage you need to look at Document Cloud Storage Services, which is more tied to processes and less tied to raw cloud storage. In this blog we will look at different ways of attaching to block storage in the cloud and at the different ways of creating and consuming these services.

To start with, there are two ways to consume storage in the Oracle Cloud: metered and un-metered. Metered is charged on an hourly/monthly basis and you pay for what you consume. If you plan on starting with 1 TB and growing to 120 TB over a 12-month period, you will pay on average for 60 TB over the year. If you consume this same service as an un-metered service you will pay for 120 TB of storage for all 12 months, since you eventually reach that capacity some time during the year.

With the metered services you also pay for the data that you pull back across the internet to your computer or data center, but not for the initial load of data into the Oracle Cloud. This differs from Amazon and other cloud services that charge both for upload and download of data. If you consume the resources in the Oracle Cloud from other cloud services like compute or database in the same data center, there is no charge for reading the data from cloud storage. For example, if I use a backup software package to copy operating system or database backups to Oracle Cloud Storage and restore these backups onto compute servers in the Oracle Cloud, there is no charge for restoring the data to the compute or database servers.

To calculate the cost of cloud storage from Oracle, look at the pricing information on the cloud web page for both metered and un-metered pricing.

If we do a quick calculation of the pricing for our earlier example, where we start with 1 TB and grow to 120 TB over a year, we can see the price difference between the two solutions, but also note how much reading the data back will eventually cost. This is something that Amazon hides when you purchase their services, because you get charged for both the upload and the download. Looking at this example, we see that 120 TB of storage will cost us $43K per year with un-metered services but $36K per year for metered services, assuming a 20% read-back of the data once it is uploaded. If the read-back number doubles, so does that portion of the cost, and the price jumps to $50K. If we compare this cost to a $3K-$4K/TB cost of on-site storage, we are looking at $360K-$480K plus $40K-$50K in annual maintenance. It turns out it is significantly cheaper to grow storage into the cloud than to purchase a rack of disks and run them in your own data center.
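
As a back-of-the-envelope check on the averaging logic (a sketch only; the ~$30/TB/month spinning-disk figure from the next paragraph is used purely for illustration, and metered read-back charges are not modeled):

             AVG_TB=$(( (1 + 120) / 2 ))   # linear growth from 1 TB to 120 TB averages ~60 TB
             echo "metered storage: \$$(( AVG_TB * 30 * 12 )) per year"   # ~$21,600 before read-back charges
             echo "un-metered:      \$$(( 120 * 30 * 12 )) per year"      # $43,200, in line with the $43K above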

The second way to consume storage cloud services is by using tape in the cloud rather than spinning disk. Spinning disk on average costs $30/TB/month, whereas tape averages $1/TB/month. Tape is not offered as an un-metered service, so you do need to look at how much you read back, because there is a charge of $5/TB to read the data back. This compares to $7/TB/month with Amazon, plus the $5/TB upload and download charges.
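
Running the same 120 TB example on tape with those rates (a sketch using the per-TB figures above):

             echo "tape storage:       \$$(( 120 * 1 * 12 )) per year"   # $1,440
             echo "20% read-back cost: \$$(( 24 * 5 ))"                  # 24 TB at $5/TB = $120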
