Feed aggregator

A Tale of Three Cities: Perspectives on innovation from New York, San Francisco and Sydney

Pythian Group - Tue, 2016-03-22 12:29

Recently, Pythian hosted a number of Velocity of Innovation (Velocity) events. I moderated two of these: one last June in New York, and one in November in San Francisco. Another event in Sydney, Australia was moderated by Tom McCann, senior customer experience analyst with Forrester.

Our Velocity events have given us unique insights into what IT professionals in various regions see as their top priorities or concerns. And although we always framed our discussions with similar questions, it was interesting to see the different directions they took in each location — especially when it came to the topic of innovation.

So what makes a particular region fertile ground for innovation? And can you measure it?

The Global Innovation Index (GII) ranks countries based on a multitude of indicators of innovation. The United States ranks number 2 on the GII, behind Switzerland, while Australia is number 17, out of 141 countries. According to the GII website, the index aims to capture the multi-dimensional facets of innovation and provide the tools to assist in tailoring policies to promote long-term output growth, improved productivity and job growth.

The ideas discussed in the US and Australian locations seemed to align with the GII results, with US panelists expressing more positive attitudes and concrete ideas on how companies can improve agility and become more innovative. And while Australia is at the forefront of technology in the Asia-Pacific region, the Australian panelists and audience members described more cautious approaches to achieving innovation.

Sydney: Cautiously moving forward

Early in the Sydney panel discussion, Chris Mendez, executive consultant for big data and analytics at Industrie IT, sparked a lively discussion about innovation by asserting that innovation is lacking in that region.

“I actually don’t think there’s enough innovation in Australia, in particular. There’s a lot of talk about it, people are doing a lot of experiments, and there are some companies who’ve set up business purely based on tool sets that use data to innovate. But there are a few things that seem to be working against innovation, and I think one of those things is that it doesn’t stand on its own,” Mendez said.

According to Francisco Alvarez, vice president, APAC at Pythian, the risks associated with innovation might be holding companies back in Australia. “The main problem for most companies is that innovation equals risk,” Alvarez said.

Alvarez also commented on what it takes to make innovation work. “If you take a step back and look at the companies that are doing well in the market, you can see that there is one factor that differentiates them: they were not afraid to try to innovate. And because of that innovation they are getting their share of the market and gaining ground. Just look at the financial market. CBA was considered crazy a few years ago for all the investment they were making in technology, social media, apps and so on. They got ahead. And now everybody is trying to do the same,” he said.

Mendez thinks that innovation needs to start from the top. “I think there’s also a very big misunderstanding at board levels about innovation because boards are there to actually stop you changing your business. The fundamental tenet is: ‘We’ve got a great business model here, it’s running well, we’ve got to make sure that any change to it doesn’t damage that.’ There’s a natural caution at board levels and it’s totally understandable,” he said.

While cautious, the Sydney panelists thought there is hope for more innovation in the future. They expressed a need to proceed slowly, watching what works for innovation leaders.

“The key is to have a balance,” Alvarez said.

New York: Early adopters

If you were to put our New York panelists on Geoffrey Moore’s Technology Adoption Lifecycle (https://en.wikipedia.org/wiki/Geoffrey_Moore), you might classify them as early adopters, rather than true innovators. Not surprising, since New York’s competitive industries such as banking and publishing rely on innovative technologies, but they don’t create them.

According to New York panelist, Forrester Analyst Gene Leganza, what makes an enterprise agile is the ability to sense what’s going on in the marketplace and to quickly respond to it. But, he said that innovation comes at a cost. “The flip side of agility is innovation. An interesting aspect of innovation is getting really hot talent into your environment. Getting the right talent and doing smart things and being leading edge are challenges. You have to figure out what level to drop in on, where you are in the industry. You need to determine if you are a startup or a state organization that needs to be a fast follower,” Leganza said.

Otto Toth, CTO at Huffington Post warned that innovating quickly is not always in the best interest of the business, or it may not be the way to do it properly. He asserted that quick innovation can actually work against the business, and that instead of making your business faster, being very agile can slow everything down.

“Too many decision-makers just slow down the process. It’s better to have a few people or a core team who make the decisions and come up with new features,” he added.

Leganza went on to describe what it takes at various levels of the organization. He said that there’s a notion at the engineer level that agility means bureaucracy won’t get in their way. Then there’s agility at the enterprise level, which is about reducing risk and understanding how soon change can be in production.

“The higher up you go, the more people are going to be receptive to what improves the whole portfolio rather than one project. This is where architects come in. They have been hands-on, but have the credibility and knowledge to guide the organization more strategically,” Leganza said.

San Francisco: The innovators

In San Francisco the narratives on innovation were quite different. Although cities don’t have their own GII ranking, you might assume that the West Coast IT leaders are the innovators. And judging by the discussion at the San Francisco event, this assumption seemed to be true.

Cory Isaacson, CTO at RMS, was one of our San Francisco panelists. His company runs catastrophe models for some of the world’s largest insurance companies, running scenarios that estimate what a disaster like an earthquake or hurricane might cost them. Isaacson has been working on bringing big data and scalable systems together to create a new cloud-based platform.

“At my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. But when you’re doing innovation, it is risky. There’s no way around it. There is a lot to evaluate: from different algorithms to the risk models and the catastrophe models,” said Isaacson.

Sean Rich, director of IT at Mozilla added to the San Francisco discussion by talking about some of the concrete innovations his company is working on. They’re taking a partnership approach to enable agility.

“Innovation is doing something new. In an effort toward achieving agility, one of the things that we’re doing is enabling the agility of our business partners, by changing our own operating model. Instead of traditional IT where we run all the services and infrastructure necessary to drive the business, we’re taking more of an enabler or partnership approach,” Rich said.

“We’re now doing things like encouraging shadow IT, encouraging the use of SaaS applications and helping them really do that better through different service offerings like vendor management or change management of user adoption for certain platforms and data integration,” he added.

“Overall, we’re looking at ourselves differently, and asking what new capabilities we need to develop, and what processes, tools and skills we need to enable agility for our marketing group or our product lines, as an example,” Rich said.

Aaron Lee, the Chief Data Officer at Pythian, runs a team that specializes in helping clients harness technology to deliver real outcomes. These engagements usually involve big data, DevOps, cloud and advanced analytics, and he’s involved in some of the most leading-edge initiatives for Pythian customers. He takes a practical approach to innovation with clients, and said that companies could improve innovation by looking at the root of the motivation for it.

“They need to ask: Why are we going down this path, trying to innovate something and what is the value of that thing we’re trying to innovate?

“If the shared goals around innovation opportunities aren’t defined in a way that actually lead to success over time, then the business is just like any other organism: it starts to get more risk averse. Then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criteria that we put in place to help ourselves and our customers get to a good place,” Lee said.

Isaacson added that companies like Google have been known to allow an engineer to take a day a week or a day every two weeks to just look at things. “I think though, the challenge is you have to get your organization up to the point where this is an economically viable thing to do. Once we get more ahead of the curve, I think we could do that kind of thing,” he said.

Interested in being a part of a discussion like these? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com. To view our schedule of upcoming events visit our Velocity of Innovation page.

Categories: DBA Blogs

accessing oracle cloud storage from command line

Pat Shuff - Tue, 2016-03-22 11:00

Now that we have the cost and use out of the way, let's talk about how to consume these services. Unfortunately, consuming raw blocks, either tape or spinning disk, is difficult in the cloud. Amazon offers you an S3 interface and exposes the cloud services as an iSCSI interface through a downloadable object or via REST API services. Azure offers something similar with REST API services but offers SMB downloadable objects to access the cloud storage. Oracle offers REST API services but offers NFS downloadable objects to access the cloud storage. Let's look at three different ways of consuming the Oracle Cloud services.

The first way is to use the REST API. You can consume the services by accessing the client libraries using Postman from Chrome or RESTClient from Firefox. You can also access the service from the curl command line.

curl -v -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0

In this example we are connecting to the identity domain metcsgse00026. The username that we are using is cloud.admin. We store the password in an environment variable OPASS and pull in the password when we execute the curl command. On Linux or a Mac, this is done from the pre-installed curl command. On Windows we had to install cygwin-64 to get the curl command working. When we execute this curl command we get back an AUTH token that can be passed in to the cloud service to create and consume storage services. In our example above we received back X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008, which is valid for 30 minutes. The next step would be to create a storage container:

curl -v -s -X PUT -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer

This will create myFirstContainer and allow us to store data either with more REST API commands or tools like CloudBerry or NFS. More information about how to use the REST API services can be found in an online tutorial.
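
As a quick illustration of how data actually lands in that container, the sketch below re-uses the same auth token to PUT a local file as an object and then lists the container contents. The file name myfile.txt is just an example.

curl -v -X PUT -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" -T myfile.txt https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer/myfile.txt

# list the objects in the container to confirm the upload
curl -X GET -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer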

The second way of accessing the storage services is through a program tool that takes file requests on Windows and translates them to REST API commands on the cloud storage. CloudBerry has an explorer that allows us to do this. The account is set up with the File -> Edit or New Accounts menu item, where you fill out the access details. Note that the username is a combination of the identity domain (metcsgse00026) and the username (cloud.admin). We could do something similar with Postman or RESTClient extensions to browsers. Internet Explorer does not have plug-ins that allow for REST API calls.

The third, and final, way to access the storage services is through NFS. Unfortunately, Windows does not offer NFS client software on desktop machines, so it is a little difficult to show this as a consumable service. Mac and Linux offer these services by mounting an NFS server as a network mount. Oracle currently does not offer SMB file shares to its cloud services, but it is on the roadmap for the future. We will not dive deep into the Oracle Storage Cloud Appliance in this blog because it gets a little complex with setting up a VM and installing the appliance software. The documentation for this service is a good place to start.
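
To give a flavour of the NFS option on a Linux or Mac client, mounting an export presented by the storage appliance is a standard NFS mount along these lines; the appliance host name and export path below are purely illustrative and depend on how the appliance was configured.

# illustrative only: mount an export presented by the Oracle Storage Cloud Appliance
sudo mkdir -p /mnt/oraclecloud
sudo mount -t nfs storage-appliance:/myFirstContainer /mnt/oraclecloud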

In summary, there are a variety of ways to consume storage services from Oracle. They are typically program interfaces and not file interfaces. The service is cost advantageous when compared to purchasing spinning disks from companies like Oracle, NetApp, or EMC. Using the storage appliance gets rid of the latency issues that you typically face and the difficulty of accessing data from a user perspective. Overall, this service provides higher reliability than on-premise storage, lower cost, and less administration overhead.

I Am Speaking at OTN Yathra 2016

Oracle in Action - Tue, 2016-03-22 09:19


The Oracle ACE Directors and Oracle Volunteers in the region are organizing their third evangelist event, ‘OTNYathra 2016’, from 23rd April 2016 to 1st May 2016. This yathra, or tour, will be a series of 6 conferences across 6 major cities (Chennai, Bangalore, Hyderabad, Pune, Mumbai and Delhi) managed by ACE Directors and Oracle Volunteers of the region.

I will be speaking at this year’s OTNYathra about the Oracle Database 12c new feature: Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), and this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP and hence its corresponding NFS export(s) will automatically fail over to one of the surviving nodes so that the NFS client continues to receive uninterrupted service of NFS exported paths.
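
As a rough command-line sketch of how this is wired up (based on the 12c srvctl documentation; the names, address and path below are illustrative and exact options can vary by release):

# register a highly available VIP for the NFS exports
srvctl add havip -id hanfs_vip1 -address hanfsvip.example.com

# export an ACFS file system through that HAVIP
srvctl add exportfs -name acfs_export1 -id hanfs_vip1 -path /u01/app/acfsmounts/share1

# start the export; Clusterware will now fail it over together with the HAVIP
srvctl start exportfs -name acfs_export1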

My session will be held on Sunday 1st May, 2016   from 3:00pm to 3:50pm in
Room 1, BirlaSoft, H–9, Sector 63, NOIDA – 201306, NCR Delhi
Hope to meet you there!!




Copyright © ORACLE IN ACTION [I Am Speaking at OTN Yathra 2016], All Right Reserved. 2016.

The post I Am Speaking at OTN Yathra 2016 appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

TekTalk Webinar: 3 Immediate Use Cases for Oracle PaaS

WebCenter Team - Tue, 2016-03-22 08:09


3 Immediate Use Cases for Oracle Platform as a Service
Human Resources * Legal * Field Services
Thursday, March 24, 2016 | 1 PM EST / 10 AM PST

Oracle Cloud Platform meets the unique needs of developers, IT professionals, and business users with a comprehensive, integrated portfolio of platform services that enables them to innovate faster, increase productivity, and lower costs. Customers can use Oracle Cloud Platform to integrate existing IT with next-generation cloud services, accelerate application development and deployment, and lead business transformation.

Innovate Quickly and Confidently with Platform as a Service.

Platform as a Service solutions can help you:
  • Accelerate business innovation
  • Improve business agility and insight
  • Reduce IT cost and complexity
  • Increase productivity and collaboration
During this 30 minute webinar, Troy Allen will provide a demonstration of how TekStream Solutions uses Oracle Cloud Platform to address specific business needs for modern organizations. We'll also present specific use cases for Human Resources, Legal and Field Services teams. Register today!

Unable to logon to BPM Workspace

Darwin IT - Tue, 2016-03-22 04:30
Yesterday I tried to test a demo BPM process with a few tasks, but I couldn't log on to the workspace. I couldn't find an error, except for:

<[ServletContext@452818297[app:OracleBPMWorkspace module:/bpm/workspace path:null spec-version:3.1]] Servlet failed with an Exception java.lang.IllegalStateException: Response already committed

I tried several possible solutions, like setting the listen-address, but that did not work. What solved the issue was setting the ServerURL in the System MBean Browser of the soa-infra.
To do so in BPM12cR2, go to the Enterprise Manager, (eg. http://darlin-vce-db:7001/em as it is in my VM) and go to the soa-infra:
Then in the SOA Infrastructure -> Administration menu open the System MBean Browser:
In the System MBean Browser look for the Application Defined MBeans and expand it:
Within the Application Defined MBeans, expand 'oracle.as.soainfra.config', then your server  and then 'SoaInfraConfig' and click on soa-infra:
Find the attribute ServerURL and edit it to the host:port of your soa-server, including the 'http://' protocol, eg. 'http://darlin-vce-db:7005':

Don't forget to hit the enter key and click on Apply:


Restart your server and it should be good to go.

GDC 2016 – Part 1: Event and Impression

Oracle AppsLab - Tue, 2016-03-22 03:34

Tawny (@iheartthannie) and I attended the 30th edition of GDC – Game Developers Conference. As shown in Tawny’s daily posts, there were so many fun events, engaging demos, and interesting sessions that we simply could not cover them all. With 10 to 30 sessions going on in any time slot, I wished for multiple “virtual mes” to attend some of them simultaneously. However, with only one “real me,” I still managed to attend a large number of sessions, mostly 30-minute sessions, to cover more topics at a faster pace.

Game Developers Conference 2016

GDC 2016

Unlike Tawny’s posts that give you in-depth looks into many of the sessions, I will try to summarize the information and take-aways in two posts: Part 1 – Event and Impression; Part 2 – The State of VR. This post will cover event overview and general impression.

1. Flash Backward

Flash Backward – 30 Years of Making Games

After two days of VR sessions, this flashback kicked off the GDC game portion with a sense of nostalgia, flashing back to games like Pac-Man and Minesweeper, evolving into console games, massive multiplayer games, social games (FarmVille), mobile games (Angry Birds), and on to VR games.

GDC has been running for 30 years, and many of the attendees were not even born at that time. The flashback started with Chris Crawford, the founder of GDC, and concluded with Palmer Luckey, the Oculus dude, who is 23, with not much to flash back on, only looking forward to the new generation of games in VR. He will be back in 20 years for the retrospective.

Video : The MERGE Statement

Tim Hall - Tue, 2016-03-22 02:28

After what seems like an eternity of being ill and having a dodgy throat, followed quickly by a couple of conferences, I’ve finally got back on the horse and recorded another video.

I was explaining a specific aspect of the MERGE statement to one of my colleagues and while I was doing it I was thinking, “Have I done a video on MERGE yet?” Now I have.

The cameo for this video is Cary Millsap. If you watch the out-takes at the end you will see the level of respect and trust I have garnered in the community. The words confused and suspicious spring to mind! :)

An honourable mention goes out to James Morle for videobombing. :)

Cheers

Tim…

Video : The MERGE Statement was first posted on March 22, 2016 at 8:28 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

IBM Bluemix NodeRed Demo using Twitter and Telstra SMS API

Pas Apicella - Mon, 2016-03-21 21:48
In this example I integrate a Twitter feed with the Telstra SMS API to send an SMS when a matching tweet arrives. It is based on the wrapper application which exposes the Telstra SMS API on Bluemix, as per a previous post.

http://theblasfrompas.blogspot.com.au/2015/08/integrating-telstra-public-sms-api-into.html

It is assumed you have a NodeRed NodeJS application running on Bluemix and are at the editor as shown below.



Steps

1. Drag a "Social -> Twitter" node onto the editor
2. Double click on the node and ensure you add your Twitter credentials and authorize Twitter to work with NodeRed itself. Also set the tag you wish to receive as part of the feed from Twitter; in this demo it is "#telstrasmaapi-pas"



3. Once done it will look as follows



4. Drag a "Function -> HTTP Request" onto the editor
5. Double click on the HTTP Request item and add details as shown below.

Method = POST
URL = http://pas-telstrasmsapi.mybluemix.net/telstrasms?to=0411151350&body=tweet sent about telstra SMS API
Name = Any name of your choice

Note: Ensure the URL is changed to the mobile number you wish to use and a BODY you wish to send as part of the text. (A quick way to test this endpoint by hand is shown in the curl sketch after these steps.)


6. Connect the twitter node to the HTTP Request node as shown below.



7. Click the "Deploy" button
8. Now log into your Twitter account and send a tweet using the TAG you identified above as shown below. You must use the TAG name you said you're looking for, in this case "#telstrasmaapi-pas"


9. It should then send an SMS to the identified mobile number you used (Australian mobiles only) as shown below.
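
Before (or instead of) wiring up the flow, the wrapper endpoint used in step 5 can be exercised by hand from a shell to confirm it accepts the POST; the mobile number and body below are just the example values from above, with spaces URL-encoded.

curl -v -X POST "http://pas-telstrasmsapi.mybluemix.net/telstrasms?to=0411151350&body=tweet%20sent%20about%20telstra%20SMS%20API"

If the wrapper application is up, the SMS should arrive on the target mobile just as it does when the Twitter node fires.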



More Information

For more information on NodeRed use the link below.

http://nodered.org/



Categories: Fusion Middleware

Calling REST Services from Application Builder Cloud Service

Shay Shmeltzer - Mon, 2016-03-21 16:33

One of the frequent requests we get when we demo ABCS is - can I invoke some external functionality that is exposed as a REST service and pass parameters to it.

Well, with a minimal amount of JavaScript coding you can do it in the current version. 

I recorded the demo below that shows you how to do that.

I'm leveraging a public REST API that github exposes to get a list of repositories for a user. The service is available at https://api.github.com/users/oracle/repos

I then design an ABCS page that has a parameter field, a button that invokes the REST/JSON call, and a placeholder for results. It looks like this: 

In addition the video also shows some other techniques that are useful, including:

  • How to create a new blank data entry page
  • How to add custom component that renders HTML content
  • How to add a button that calls a REST service
  • How to pass a parameter to the JavaScript custom code
  • How to set a page to be the default page of the app
  • How to stage your application for external testing


It seems that right now you are restricted to accessing REST services that are secured over HTTPS protocol (which is a good thing).

Note that you of course don't have to stage the app to see it run, you can just go into live mode, or run it to see it working. I just wanted to make sure I have a demo out there that shows how staging works.

The JavaScript snippet I'm using in the video is: 

// userName below stands for the value of the parameter field passed in to the custom code
$.getJSON("https://api.github.com/users/" + userName + "/repos", function (result) {
    $.each(result, function (i, field) {
        $('[name="myOutput"]').append(field.name + " ");
    });
});

resolve();

If you actually add

$('[name="results"]').empty();

as the first line, it will clear the field for you each time you re-press the button.

Categories: Development

New Content on Our Oracle.com Page

Oracle AppsLab - Mon, 2016-03-21 16:18

Back in September, our little team got a big boost when we launched official content under the official Oracle.com banner.

I’ve been doing this job for various organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the Oracle.com world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.

Today, I’m excited to launch new content in that space, which, for the record, is here:

www.oracle.com/webfolder/ux/applications/successStories/emergingTech.html

We have a friendly, short URL too:

tinyurl.com/appslab

The new content focuses on the methodologies we use for research, design and development. So you can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.

Sometimes, there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iterations and practice.

All-in-all, I’m very happy with the content, and I hope you find it interesting.

Not randomly, here are pictures of Noel (@noelportugal) showing the Smart Office in Australia last month.


The IoT Smart Office, just happens to be the first project we undertook as an expanded team in late 2014, and we’re all very pleased with the results of our blended, research, design and development team.

I hope you agree.

Big thanks to the writers, Ben, John, Julia, Mark (@mvilroxk) and Thao (@thaobnguyen) and to Kathy (@klbmiedema) and Sarahi (@sarahimireles) for editing and posting the content.

In the coming months, we’ll be adding more content to that space, so stay tuned.

Apache Cassandra 2.1 Incremental Repair

Pythian Group - Mon, 2016-03-21 15:05

The “incremental repair” feature has been around since Cassandra 2.1. Conceptually the idea behind incremental repair is straightforward, but it can get complicated. The official Datastax document describes the procedure for migrating to incremental repair, but in my opinion, it doesn’t give a full picture. This post aims to fill that gap by summarizing and consolidating the information on Cassandra incremental repair.

Note: this post assumes the reader has a basic understanding of Apache Cassandra, especially the “repair” concept within Cassandra.

 

1. Introduction

The idea of incremental repair is to mark SSTables that are already repaired with a flag (a timestamp called repairedAt indicating when it was repaired) and when the next run of repair operation begins, only previously unrepaired SSTables are scanned for repair. The goal of an “incremental repair” is two-fold:

1) It aims to reduce the big expense that is involved in a repair operation that sets out to calculate the “merkle tree” on all SSTables of a node;

2) It also makes repair network efficient because only rows that are marked as “inconsistent” will be sent across the network.

2. Impact on Compaction

“Incremental repair” relies on an operation called anticompaction to fulfill its purpose. Basically, anticompaction means splitting an SSTable into two: one contains repaired data and the other contains unrepaired data. With the separation of the two sets of SSTables, the compaction strategy used by Cassandra also needs to be adjusted accordingly. This is because we cannot merge/compact a repaired SSTable with an unrepaired SSTable; otherwise, we lose the repaired state.

Please note that when an SSTable is fully covered by a repaired range, no anticompaction will occur. It will just rewrite the repairedAt field in SSTable metadata.

The SizeTiered compaction strategy handles this simply: size-tiered compaction is executed independently on the two sets of SSTables (repaired and unrepaired) that result from the incremental repair anticompaction operation.

For the Leveled compaction strategy, leveled compaction is executed as usual on the repaired set of SSTables, but for the unrepaired set of SSTables, SizeTiered compaction is executed.

For DateTiered compaction strategy, “incremental repair” should NOT be used.

3. Migrating to Incremental Repair

By default, “nodetool repair” of Cassandra 2.1 does a full, sequential repair. We can use “nodetool repair” with “-inc” option to enable incremental repair.

For the Leveled compaction strategy, incremental repair actually changes the compaction strategy to SizeTiered compaction for unrepaired SSTables. If nodetool repair is executed for the first time on the Leveled compaction strategy, it will do SizeTiered compaction on all SSTables, because until the first incremental repair is done, Cassandra doesn’t know the repaired states. This is a very expensive operation, and it is therefore recommended to migrate to incremental repair one node at a time, using the following procedure (sketched in shell after the list):

  1. Disable compaction on the node using nodetool disableautocompaction
  2. Run the default full, sequential repair.
  3. Stop the node.
  4. Use the tool sstablerepairedset to mark all the SSTables that were created before you disabled compaction.
  5. Restart cassandra
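
A condensed shell version of the migration steps above might look as follows; the SSTable list file is illustrative, and steps 3 and 5 are whatever commands you normally use to stop and start Cassandra on your platform.

# 1. disable automatic compaction on the node
nodetool disableautocompaction

# 2. run the default full, sequential repair
nodetool repair

# 3. stop the node (e.g. via your service manager)

# 4. mark the SSTables created before compaction was disabled as repaired
#    (sstable-list.txt is an illustrative file listing those SSTable file names)
sstablerepairedset --is-repaired -f sstable-list.txt

# 5. restart Cassandra
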
3.1 Tools for managing SSTable repaired/unrepaired state

Cassandra offers two utilities for SSTable repaired/unrepaired state management:

  • sstablemetadata is used to check repaired/unrepaired state of an SSTable. The syntax is as below:

             sstablemetadata <sstable filenames>

  • sstablerepairedset is used to manually mark if an SSTable is repaired or unrepaired. The syntax is as below. Note that this tool has to be used when Cassandra is stopped.

             sstablerepairedset [--is-repaired | --is-unrepaired] [-f <sstable-list> | <sstables>]

Please note that with the sstablerepairedset utility, you can also stop incremental repair on Leveled compaction and restore the data to be leveled again with the "--is-unrepaired" option. Similarly, the node needs to be stopped first.
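
As a concrete illustration (the data file path is just an example and the output is abbreviated), checking the state of a single SSTable looks like this:

sstablemetadata /var/lib/cassandra/data/mykeyspace/mytable/mykeyspace-mytable-ka-1-Data.db | grep -i repaired
# Repaired at: 0
# A value of 0 means the SSTable is still unrepaired; a non-zero value is the
# repairedAt timestamp set by incremental repair or by sstablerepairedset.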

4. Other Considerations with Incremental Repair

There are some other things to consider when using incremental repair.

  • For Leveled compaction, once incremental repair is used, it should be used continuously; otherwise, only SizeTiered compaction will be executed. It is recommended to run incremental repair daily and run full repairs weekly to monthly.
  • Recovering from missing data or corrupted SSTables requires a non-incremental full repair.
  • The “nodetool repair” --local option should only be used with full repair, not with incremental repair.
  • In C* 2.1, sequential repair and incremental repair do NOT work together.
  • With an SSTable’s repaired state being tracked via its metadata, some Cassandra tools can impact the repaired states:
    1. Bulk loading will make loaded SSTables unrepaired, even if they were repaired in a different cluster.
    2. If scrubbing causes dropped rows, new SSTables will be marked as unrepaired. Otherwise, SSTables will keep their original repaired state.
Categories: DBA Blogs

accessing cloud storage

Pat Shuff - Mon, 2016-03-21 15:02
Oracle cloud storage is not the first product that performs basic block storage in the cloud. The name is a little confusing as well. When you think of cloud storage, the first thing that you think of is Dropbox, Box.com, Google Docs, or some other file storage service. Oracle Cloud Storage is a different kind of storage. This storage is more like Amazon S3 storage and less like file storage in that it provides the storage foundation for other services like compute, backup, or database. If you are looking for file storage you need to look at Document Cloud Storage Services, which is more tied to processes and less tied to raw cloud storage. In this blog we will look at different ways of attaching to block storage in the cloud and at the different ways of creating and consuming services.

To start off with, there are two ways to consume storage in the Oracle Cloud: metered and un-metered. Metered is charged on a per-hourly/monthly basis and you pay for what you consume. If you plan on starting with 1 TB and growing to 120 TB over a 12 month period, you will pay on average for 60 TB over the year. If you consume this same service as an un-metered service you will pay for 120 TB of storage for 12 months, since you eventually cross the 1 TB boundary some time during the year. With the metered services you also pay for the data that you pull back across the internet to your computer or data center, but not for the initial load of data to the Oracle Cloud. This differs from Amazon and other cloud services that charge both for upload and download of data. If you consume the resources in the Oracle Cloud by other cloud services like compute or database in the same data center, there is no charge for reading the data from the cloud storage. For example, if I use a backup software package to copy operating system or database backups to the Oracle Cloud Storage and restore these services into compute servers in the Oracle Cloud, there is no charge for restoring the data to the compute or database servers.

To calculate the cost of cloud storage from Oracle, look at the pricing information on the cloud web page for metered and un-metered pricing.

If we do a quick calculation of the pricing for our example previously, where we start with 1 TB and grow to 120 TB over a year, we can see the price difference between the two solutions, but also note how much reading data back will eventually cost. This is something that Amazon hides when you purchase their services, because you get charged for both the upload and the download. Looking at this example we see that 120 TB of storage will cost us $43K per year with un-metered services but $36K per year for metered services, assuming a 20% reading of the data once it is uploaded. If the read-back number doubles, so does the cost, and the price jumps to $50K. If we compare this cost to a $3K-$4K/TB cost of on-site storage, we are looking at $360K-$480K plus $40K-$50K in annual maintenance. It turns out it is significantly cheaper to grow storage into the cloud rather than purchasing a rack of disks and running them in your own data center.

The second way to consume storage cloud services is by using tape in the cloud rather than spinning disk in the cloud. Spinning disk on average costs $30/TB/month whereas tape averages $1/TB/month. Tape is not offered in an un-metered service so you do need to look at how much you read back because there is a charge of $5/TB to read the data back. This compares to $7/TB/month with Amazon plus the $5/TB upload and download charges.

Pythian at Collaborate 16

Pythian Group - Mon, 2016-03-21 14:27

Collaborate is a conference for Oracle power users and IT leaders to discuss and find solutions and strategies based on Oracle technologies. Getting this many Oracle experts in one place only happens once per year, and Pythian is excited to be attending. If you are attending this year, make sure to register for some of the sessions featuring Pythian’s speakers, listed below.

Collaborate 16 is on April 10-14, 2016 at the Mandalay Bay Resort and Casino in Las Vegas, Nevada, US.

 

Pythian Collaborate 16 Speaker List:

 

Michael Abbey | Consulting Manager | Oracle ACE

Communications – the Good, the Bad, and the Best

Tues April 12 | 9:15 a.m. – 10:15 a.m. | North Convention, Room South Pacific D

Traditional DB to PDB: The Options

Tues April 12 | 2:15 p.m. – 3:15 p.m. | Room Jasmine A

Documentation – A Love/Hate Relationship (For Now)

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Palm A

 

Nelson Caleroa | Database Consultant | Oracle ACE

Exadata Maintenance Tasks 101

Tues April 12 | 10:45 a.m. – 11:45 am | Room Palm C

Evolution of Performance Management: Oracle 12c Adaptive Optimization

Tues April 12 | 3:30 p.m. – 4:30 p.m | Room Jasmine A

 

Subhajit Das Chaudhuri | Team Manager

Deep Dive Into SSL Implementation Scenarios for Oracle Application E-Business Suite

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Breakers E

 

Alex Gorbachev | CTO | Oracle ACE Director

Oaktable World: TED Talks

Wed April 13 | 12:00 p.m. – 12:30 p.m. | Room Mandalay Bay Ballroom

Oaktable World: Back of a Napkin Guide to Oracle Database in the Cloud

Wed April 13 | 4:15 p.m. – 5:15 p.m. | Room Mandalay Bay Ballroom

 

Gleb Otochkin | Principal Consultant

Two towers or story about data migration. Story about moving data and upgrading databases.

Mon April 11 | 4:30 p.m. – 5:30 p.m. | Room Jasmine A

 

Simon Pane | ATCG Principal Consultant | Oracle Certified Expert

Oracle Database Security: Top 10 Things You Could & Should Be Doing Differently

Mon April 11 | 2 p.m. – 3 p.m. | Room Palm A

Time to get Scheduling: Modernizing your DBA scripts with the Oracle Scheduler (goodbye CRON)

Tues April 12 | 10:45 a.m. – 11:45 a.m. | Room Palm A

 

Roopesh Ramklass | Principal Consultant

Oracle Certification Master Exam Prep Workshop

Sun April 10 | 9:00 a.m. – 3:00 p.m. | Room Jasmine C

Fast Track Your Oracle Database 12c Certification

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Jasmine A

 

Categories: DBA Blogs

Next Five New Features in Oracle Database 12c for DBAs : Part II

Online Apps DBA - Mon, 2016-03-21 14:20

 This post is part of a series on Oracle Database 12c new features; check out our previous post, Five New Features in Oracle Database 12c for DBAs : Part I, here. Oracle 12c means different things to different people. It all depends on which areas you are looking at, as there are improvements in many areas. Summarized […]

The post Next Five New Features in Oracle Database 12c for DBAs : Part II appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle BPM 11g: Mapping Empty Elements

Jan Kettenis - Mon, 2016-03-21 13:39
In this blog article I explain what happens with mappings for which the source is empty and you map it to an optional or mandatory element. The scenarios described in this article are based on SOA / BPEL 11g. In a follow-up article I will describe what happens when you do the same in SOA 12c (which is not the same).

Let's assume we have a data structure like this:


And let's assume we have a BPEL that takes a message of the above type as input, and - using a couple of different scenarios - maps it to another element of the same type as output.

The table below shows what happens when you map empty data to a mandatory or optional element (i.e. minOccurs="0"), taking payload validation into consideration, as well as making use of the "ignoreMissingFromData" and "insertMissingToData" features of XPath mappings (only available in BPEL and not in BPM). In the table below, "null" means that the element is not there at all, while "empty" means that the element is there but has no value. As you can see from the XSD, an empty value is nowhere allowed (otherwise the element should have an attribute xsi:nil with value "true").



As you can see, disabling payload validation will lead to corrupt data. But even with payload validation on you may get a result that might not be valid in the context of usage, like an empty mandatory or optional element. Unless empty is a valid value, you should make sure that optional elements are not there when they have no value.

To set "ignoreMissingFromData" and "insertMissingToData", right-mouse click the mapping and toggle the values:


When using the "ignoreMissingFromData" feature with a null optional element mapped to itself, the result is as on the left below. When also the "insertMissingToData" feature is used, the result is as on the right:


Mind that the "insertMissingToData" feature also leads to namespace prefixes for each element.

NEW: Sales Collateral on Social Customer Service & the Impact on CX

Linda Fishman Hoyle - Mon, 2016-03-21 12:58

A Guest Post by Amy Sorrells (pictured left), Oracle Product Management

Traditional marketing is trying to get people to notice and engage with your brand. Customer service is engaging with someone who is already invested in your brand. According to various sources, it is anywhere from five to 20 times more expensive to attract a new customer than to keep an existing one satisfied.

Today, social customer service is an absolute expectation from consumers and is playing a major role in the customer experience. The impact social service has on business is more than just resolving the issue—it’s driving brand awareness, loyalty, and business value.

Oracle Social Cloud certainly understands the importance of social service, which we underscore in our newly released paper, “Social’s Shift to Service: Why Customer Service Engagement is the New Marketing.” The paper states the business benefits of social customer service with insights from customers like Cummins Inc., General Motors, Southwest Airlines, Vodafone, Mothercare and more, as well as insights from leading analysts.

The most important takeaway: Social service is more than just resolving issues—it is driving brand reputation, loyalty and real business value:

“Customer service is not just about resolving issues; it’s about inspiring customer loyalty and engagement, and uncovering new insights. The hidden opportunity here is to identify problems or defects ahead of time—find insights that allow us to take ‘customer service’ to an entirely new level—learn, engage, empower, inspire and delight.” – Flavio Mello, Cummins Inc., Digital Communications Director

“The benefit for us was not only selling the vehicle, but also getting two million views of BatDad’s test drive and his impressions of the sales experience.” – Rebecca Harris, General Motors, Head of Social Center of Expertise, on a successful social engagement with a consumer named @BatDad

Better social service leads to overall better customer service and increased profits. A recent global American Express report shows 74% of consumers have spent more due to good customer service. A recent McKinsey study stated that companies that improve their customer service can see a 30-50% improvement in key measurements including likelihood to recommend and make repeat purchases.

“For word of mouth and referrals, social is a really important way that customers use to filter down to what’s important, so social is absolutely critical to us… We can say a product is great, but if a real customer who has bought our stuff says it’s great, there’s a lot more sincerity to that.” – Claire Dormer, Mothercare, Head of Content & Community

There is enormous untapped potential around customer service engagements—powered by social listening—from uncovering new customer and industry insights to creating brand advocates.

“Social media is the world’s biggest focus group… social is opening up a new area for communication so we’re getting lots of comments and feedback from customers, which means we can very quickly feed those insights into key areas of the business… and better understand our customers and their needs.” – Richard Bassinder, YBS, Social Media Manager

Social media listening, engagements and insights are critical to most every aspect of business today. But the role customer service plays with social will increase drastically as mobile-social usage continues to soar and consumers’ expectations rise. Social’s shift to service is a sign of our times and a customer experience priority for all businesses.

Tools vs Products

Floyd Teter - Mon, 2016-03-21 12:17
I have a garage full of neat tools.  Drill press, miter saw, band saw, table saw, power sander, Dremel, several Milwaukee power drills and portable hand saws, gauges, clamps, vise grips...yeah, the works. But I've learned something over the years; other than other people with a shared interest in nifty tools, nobody cares about the tools I have.  What they care about is the speed, quality and cost involved in making things with those tools.  I can own the niftiest hammer on the face of the planet, but few people will care; they care about the house I build, regardless of the coolness of the hammer.

This concept is not limited to traditional shop and construction tools.  Pull out your smartphone.  Take a look at the apps.  Nobody cares about what tools were used to build the app if it misses the mark on quality, speed, ease of use, or cost.

The same holds true for SaaS applications.  Customers don't care about the underlying platform...nor should they, when the idea is to make all that complexity transparent to them.  Customers care about speed, ease of implementation and use, quality (including reliability, depth of features and security), how well the application will perform their business process, and the information the application will provide about those executed transactions.

So, to put it bluntly, SaaS is not about the platform nor the development tools.  It's about ease of use, quality, and cost.  Let's stop talking about the technology and start talking about the things that matter.

REST API Now Supports Metadata in DOCS

WebCenter Team - Mon, 2016-03-21 11:21

Author: Victor Owuor, Senior Director, Oracle

It is our goal that Oracle Documents Cloud Service (DOCS) should be a platform for easily building cloud applications.   To make that possible, we provide a framework for embedding the DOCS user experience within your application.   We also offer a REST API for making calls to DOCS, allowing you to surface its capability within your user interface.  We are proud to announce that the REST API now supports metadata.

We will describe metadata with reference to an application for managing assets for a real estate listing site.   The application will manage the relevant assets within DOCS and surface those assets in UI that it will render separately.   As you would imagine, the application will need to store various images of the properties listed in the application.   For example, there may be a front picture and pictures of various rooms.  There will also be a need to track additional descriptive information about the assets.  It is that descriptive information that we refer to as metadata.

The application might need to track an address for each property.  The address comprises a collection of:
  • A street address
  • A city
  • A state
  • A country
Each of those is referred to as a metadata field in DOCS.  The related fields are grouped in a metadata collection.  The application will define multiple collections, and each folder or document could be associated with several collections.  For example, in addition to the address collection, properties on sale might also be associated with a for-sale collection, including the following fields:
  • A sale price
  • Property taxes
  • Previous sale prices
In contrast, rental properties would instead be associated with a for-rent collection, including the following fields:
  • Rental price
  • Lease term
DOCS allows an administrator to easily define metadata collections and the fields in them.   In the example above, an administrator would define the address collection as follows:
POST …/metadata/Address?fields=Street,City,State

He can trivially alter the address collection to include a country as follows:

PUT …/metadata/Address?addFields=Country

Once a collection and its fields are defined, any user can assign it to a folder or a document.  The calls for doing so are as follows:

POST …/folders/{folder id}/metadata/Address
POST …/files/{file id}/metadata/Address

 Of course, only users with contributor access to the folder or document may assign a collection.

Having assigned the collection, users may set values for the various fields in the collection as follows: 

POST …/folders/{folder id}/metadata?collection=Address&Zip=55347&City=Minneapolis
POST …/files/{file id}/metadata?collection=Address&Zip=55347&City=Minneapolis

Collections and values assigned to a folder are inherited by both its sub-folders and any documents within it.  The inherited value can be overridden by assigning a specific value for the metadata field to an item.

 All of the metadata properties would be of little value if you could not retrieve metadata values previously assigned to a document.   We allow you to do that in a simple call that is formatted as follows:

GET …/folders/{folder id}/metadata
GET …/files/{file id}/metadata

That call returns the metadata values in a JSON object that is contained within the HTTP response.  A sample response is shown below:
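
As a rough illustration only (the host name is a placeholder, the "…" stands for the same API base path used in the calls above, and the exact attribute names in the response will differ by instance), fetching the metadata for a file with curl and the values set above might look like this, with the shape of the returned JSON sketched in the comments:

curl -u user:password "https://<your-docs-host>/…/files/{file id}/metadata"

# illustrative response shape:
# {
#   "metadata": {
#     "Address": { "Street": "...", "City": "Minneapolis", "State": "...", "Country": "...", "Zip": "55347" }
#   }
# }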

Additional information about the metadata feature is available here.


Oracle Support Accreditation

Joshua Solomin - Mon, 2016-03-21 11:05
Be More Productive with My Oracle Support

Looking for best practices to take your support experience to the next level?

If you frequently use My Oracle Support (MOS), My Oracle Support Communities, or Cloud Support portal for knowledge search and managing service requests, take advantage of the Oracle Support Accreditation learning program.

Over 19,000 users have already completed an accreditation learning path and exam to build a personalized support toolkit for their role. All accreditation learning resources and exam material are already included in your support contract.

Oracle Support Accreditation includes:

  • 14 learning paths to increase your expertise and efficiency in completing support activities.
  • Level 1 accreditation highlights core features of the portal applications and provides recommendations to increase productivity.
  • Level 2 focuses on individual software products to demonstrate best practices for your specific applications.

Learn key concepts about diagnostics, patching, and finding product information more easily and effectively, with tips for implementing them into your daily support activities based on feedback from Oracle product experts. When finished, your personalized resource toolkit will help increase your productivity and streamline your support activities.

Is today your day to become an Oracle Support Accredited User?

Learn more: Oracle Support Accreditation Series Index, Document 1583898.1
Join the conversation in the My Oracle Support Community: Oracle Support Accreditation.

Enterprise Manager 13c And Database Backup Cloud Service

Fuad Arshad - Mon, 2016-03-21 10:35

The Oracle Database Backup Cloud Service allows for backup of an Oracle Database to the Oracle Cloud using RMAN. Enterprise Manager 13c provides a very easy way to configure the Oracle Database Backup Cloud Service. This post will walk you thru setup of the Oracle Database Backup Cloud Service as well as running backups from EM.


There is a new menu item to configure the Database Backup Cloud Service (DBCS) in the Backup & Recovery drop-down.


This will show you how to set up the Database Backup Cloud Service. If nothing was configured before, you will see the screen shown below.

Once you click on Configure Database Backup Cloud Service, you will be asked for the Service (Storage) and the Identity Domain that you want the backups to go to. This identity domain comes as part of the DBCS or as part of DBaaS that can be purchased from Oracle Cloud.


Once the settings are saved, a popup will confirm that the settings have been saved.


After saving the settings, submit the configuration job. This will download the Oracle Backup Module to the hosts as well as configure the media management settings. The job will provide details and confirm all configuration is complete, and will configure this on all nodes of a RAC, which can save a lot of time.

We have now completed the setup and can validate it by looking at the Configure Cloud Backup setup. This also has an option to test a cloud backup.

Let's ensure we have settings there by checking in Backup Settings: the Media Management settings will show the location of the library, environment and wallet. The Database Backup Cloud Service requires all backups sent to it to be encrypted.


You can also validate this by connecting to rman on the command line and running a "SHOW ALL"

As you can see, we have confirmed that the media management setup is complete, as well as run a job to download the Cloud Backup Module and configure it.
Now as a final step we will configure a backup and run an RMAN backup to the cloud. In the Backup and Recovery menu, schedule a backup. Fill out the pertinent settings and make sure you either encrypt via a password or a wallet, or both. The backup that I scheduled was encrypted using a password.

On the second page, select the destination, which is the Cloud in our case, and schedule it.


Validate that the settings are right and execute the job. You can monitor the job by clicking View Job. The new job interface in EM13c is really nice and allows you to see a graphical representation of execution time as well as a log of what is happening, side by side, like below.

Once the backup is completed you can not only see the backup thru EM but also check it using RMAN on the command line.
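
For example, a quick way to confirm both the media management configuration and the completed backup from a shell on the database host (assuming the usual OS authentication) is:

rman target / <<EOF
SHOW ALL;
LIST BACKUP SUMMARY;
EOF

SHOW ALL should echo the SBT channel configuration pointing at the cloud backup module and wallet, and LIST BACKUP SUMMARY should include the backup pieces written to the cloud.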

There are a couple of things that I didn't show during the process. Parallelism during backups is important, as is compression.
Enterprise Manager 13c makes the already simple process of setting up backups to the Database Backup Cloud Service even easier.
