
Feed aggregator

Selfies. Social. And Style: Smartwatch UX Trends

Oracle AppsLab - Tue, 2016-01-05 02:04

From Antiques to Apple

“I don’t own a watch myself,” a great parting shot by Kevin of Timepiece Antique Clocks in the Liberties, Dublin.

I had popped in one rainy day in November to discover more about clock making and to get an old school perspective on smartwatches. Kevin’s comment made sense. “Why would he need to own a watch?” I asked myself, surrounded by so many wonderful clocks from across the ages, all keeping perfect time.

This made me consider what might influence people to use smartwatches. Such devices offer more than just telling the time.


From antiques to Apple: UX research in the Liberties, Dublin

2015 was very much the year of the smartwatch. The arrival of the Apple Watch earlier in 2015 sparked much press excitement and Usable Apps covered the enterprise user experience (UX) angle with two much-read blog pieces featuring our Group Vice President, Jeremy Ashley (@jrwashley).

Although the Apple Watch retains that initial consumer excitement (at the last count about 7 million units have shipped), we need to bear in mind that the Oracle Applications User Experience cloud strategy is not about one device. The Glance UX framework runs just as well on Pebble and Android Wear devices, for example.


It’s not all about the face. Two exciting devices came my way in 2015 for evaluation against the cloud user experience: The Basis (left) and Vector Watch.

Overall, the interest in wearable tech and what it can do for the enterprise is stronger than ever. Here’s my (non-Oracle endorsed) take on what’s going to be hot and why in 2016 for smartwatch UX.

Trending Beyond Trendy

Two devices came my way for evaluation in 2015 that, for me, captured emerging trends in smartwatch user experience.

First there was the Basis Peak (now just Basis). I covered elsewhere my travails in setting up the Basis and how my perseverance eventually paid off.


Basis: The ultimate fitness and sleep tracker. Quantified self heaven for those non-fans of Microsoft Excel and notebooks. Looks great too!

Not only does the Basis look good, but its fitness functionality, range of activity and sleep monitoring “habits,” data gathering, and visualizations suited my busy work/life balance and thrilled me. Over the year, the Basis added new features that reflected a more personal quantified-self angle (urging users to take a “selfie”) and then acknowledged that fitness fans might be social creatures (or at least in need of friends) by prompting them to share their achievements, or “bragging rights,” to put it the modern way.


Your bragging rights are about to peak: Notifications on Basis (middle).

Second there was the Vector Watch, which came to me by way of a visit to Oracle EPC in Bucharest. I was given a device to evaluate.

A British design, with development and product operations in Bucharest and Palo Alto too, the Vector looks awesome. The sophisticated, stylish appearance of the watch screams class and quality. It is easily worn by the most fashionable people around and yet packs a mighty user experience.


Vector Watch: Fit executive meets fashion.

I simply love the sleek, subtle, How To Spend It positioning, the range of customized watch faces, notifications integration, activity monitoring capability, and the analytics of the mobile app that it connects with via Bluetooth. Having to charge the watch battery only 12 times (or fewer) each year means one less strand to deal with in my traveling Kabelsalat.

The Vector Watch affordance for notifications is a little quirky, and sure it’s not the Garmin or Suunto that official race pacers or the hardcore fitness types will rely on, and maybe the watch itself could be a little slimmer. But it’s an emerging story, and overall this is the kind of device for me, attracting positive comments from admirers (of the watch, not me) worldwide, from San Francisco to Florence, mostly on its classy looks alone.

I’m so there with the whole #fitexecutive thing.

Perhaps the Vector Watch speaks to the qualitative self, complementing the quantified-self needs of our well-being that the Basis delivers on. Regardless, the Vector Watch tells us that wearable tech is coming of age in the fashion sense. Wearable tech has to. These are deeply personal devices, and as such, they continue the evolution of wristwatches looking good and functioning well while matching the user’s world and responding to what’s hot in fashion.

Heck, we are now even seeing the re-emergence of pocket watches as tailoring adapts and facilitates their use. Tech innovation keeps time and keeps up, too, and so we have Kickstarter wearable tech solutions for pocket watches appearing, designed for the Apple Watch.

The Three “Fs”

Form and function are a mix that doesn’t always quite gel. Sometimes compromises must be made when trying to build personal technology that is both great-looking and useful. Such decisions can shape product adoption. The history of watch making tells us that.

Whereas the “F” of the smartwatch era of 2014–2015 was “Fitness,” it’s now apparent that the “F” that UX pros need to empathize with in 2016 will be “Fashion.” Fashionable technology (#fashtech) in the cloud, the device’s overall style and emotional pull, will be as powerful a driver of adoption as the mere outer form and the inner functionality of the watch.

The Beauty of Our UX Strategy

The Oracle Applications Cloud UX strategy—device neutral that it is—is aware of such trends, ahead of them even.

The design and delivery of beautiful things has always been at the heart of Jeremy Ashley’s group. Watching people use those beautiful things in a satisfied way and hearing them talk passionately about them is a story that every enterprise UX designer and developer wants the bragging rights to.

So, what will we see on the runway from Usable Apps in 2016 in this regard?

Stay tuned, fashtechistas!

Editor’s note: Cross-posted from Usableapps (@usableapps), thanks to our old mate Ultan (@ultan), a guy who knows both fashion and tech.

What Blackboard’s New CEO Needs to Do Now (and how you can tell if he’s doing it)

Michael Feldstein - Mon, 2016-01-04 12:11

By Michael Feldstein

As Phil noted in his post, Blackboard has hired a new CEO, a guy by the name of Bill Ballhaus. We don’t know much about him yet, other than that he came from outside education. (That shouldn’t be considered a disqualifier, by the way. Instructure CEO Josh Coates also came from outside education, for example, and he has kept most of his customers very happy so far.) We’ll learn more about him over the next days, weeks, and months. In the meantime, it’s worth taking some time to consider the challenge he has in front of him.

Still Under Private Equity, But Different Conditions

Before we get into the steps that management has to make, it’s worth taking a moment to remind and update ourselves regarding the conditions under which they operate. Blackboard is owned by Providence Equity Partners, a private equity (PE) company. As I wrote back when they were acquired, there are two basic business strategies for PE—landlords (usually slumlords) or house flippers. Sometimes PE buys a property that generates a lot of cash and just does the minimum to keep the money coming in. Product generally doesn’t improve much and, in fact, often slowly deteriorates. This is the landlord scenario. Other times, PE buys a property that they think has obvious problems that can be fixed in a few years. They make very careful, targeted investments with the goal of selling the property for a lot more than they paid for it a few years down the road. This is the house flipper scenario. There is a third model that I didn’t mention in the original post, which is junkyard. In this model, PE thinks that the parts of the company are worth more than the whole. They buy it cheap, fleece it for parts, and junk the pieces that don’t sell.

One important consideration for all of these models is that the purchase of the company is almost always made with a lot of debt financing. Debt financing means interest payments. And those interest payments go to the bottom line of the companies being purchased. So, for example, even if Providence was interested in flipping Blackboard, any investments they make into fixing up the house will be bounded by those interest payments. Basically, the CEO of the company is given the equivalent of a second mortgage and told to use the money to get the property into selling shape. In retrospect, it is clear that Providence tried and failed to use the house flipping strategy with Blackboard in a three-year time frame. (More accurately, Jay Bhatt tried and failed on Providence’s behalf.)

So what now?

We don’t know much about the new guy yet, but one thing we do know is that he has successfully flipped a company for Providence before. That suggests that the company may not have given up on the flipping strategy yet. Also, Ballhaus’ title as listed on the web site is “chairman, president, and chief executive officer.” I may be mistaken, but I believe that having Blackboard’s CEO also be Chairman of the Board is new. It is also relatively unusual (though not unheard of) with PE-owned companies. It suggests that they are giving the new guy more control and independence than Bhatt had. It also suggests that Providence is in a relatively weak negotiating position with the new guy. Chances are good that he has been given another three to five years of runway to turn the company around for a flip. Chances are also good that he is in a position to loosen the purse strings a little more than his predecessor was.

But that doesn’t mean that his job will be easy.

An Uphill Climb

Ballhaus inherits a company with a number of problems. Their customers are increasingly unhappy with the support they are getting on the current platform, unclear about how they will be affected by future development plans, and unconvinced that Blackboard will deliver a next-generation product in the near future that will be a compelling alternative to the competitors in the market. Schools going out to market for an LMS seem less and less likely to take Blackboard seriously as a contender, which is particularly bad news since a significant proportion of those schools are currently Blackboard schools. The losses have been incremental so far, but it feels like we are at an inflection point. The dam is leaking, and it could burst. Meanwhile, tensions are growing between Blackboard and Moodle HQ in an environment where Moodle is core to Blackboard’s international strategy and international is the main area where Blackboard is seeing growth. The first six months of Ballhaus’ tenure will be particularly important if the company is going to manage a turnaround without going through a period of freefall.

Here are some of the things that the new administration will need to accomplish in the next six months to reduce the chances of a disaster:

  • Visibly improve support for 9.x: Phil and I have been hearing a growing chorus of complaints that bugs in 9.x are not getting fixed and development of new features has been slow. Nothing sends customers running faster than a sense that the company just isn’t responsive to current needs. Blackboard needs to not only improve its support numbers but make sure customers know that it is doing so and making it a top priority for the foreseeable future.
  • Clarify messages around the future of 9.x and of managed hosting: Customers have been getting conflicting messages out of the company. We spoke to one who had been told outright that 9.x is no longer a priority by one VP and that 9.x is going to be strongly supported going forward by another VP. After I complained about their confused messaging on the future of managed hosting in the world of Ultra, Blackboard did take some steps to clarify. But I don’t believe that message got out broadly to the customers and, in any case, it is swamped by the overall confusion around 9.x versus Ultra.
  • Prove that Ultra is real: While there are customers who will not be quick to move off of 9.x (for a variety of reasons), nobody believes that the current platform represents a compelling future for digital learning environments. It is long in the tooth. But schools evaluating LMSs have largely discounted Ultra because they don’t think it’s real and they’re not convinced that it ever will be. Now that the product is a year late, they have increased reason to be skeptical. We have heard that there are schools piloting Ultra, but I am not aware of any public information about how these pilots are going or even which schools are participating. Blackboard needs to ship Ultra and trot out some customers who are willing to speak publicly about their experiences with it. If they fail, they will not get a do-over.
  • Keep the pedal to the metal internationally: Blackboard’s one bright spot at the moment is international growth. The new guy may (or may not) get a little more financial slack, but he will still be far from getting a blank check. International sales need to keep growing.
  • Resolve the tensions with Moodle HQ (one way or another): As Phil and I have written about before, Moodle is critical to Blackboard’s international growth, but there are growing signs of tension between Blackboard and Moodle HQ, the company that shepherds Moodle’s open source development. While this item is less of a “must do” than it is a “probably gonna happen,” I think it likely that Blackboard will either mend fences or go their separate ways in the next six months. Unresolved tensions are not good for either organization.
What to Watch For

If you’re a North American Blackboard customer, here are some signs to watch for over the next six months to help you get a sense of whether the new guy is going to work out:

  • Ballhaus gets out of the building: The first thing the new CEO will need to do is listen very loudly. Customers were feeling a lot of anxiety and confusion even before the announcement. If word doesn’t get out broadly that the CEO is visiting schools, asking good questions, taking notes, and following up, he will already be losing.
  • Clarity around 9.x support and development: This will probably take a little longer, but it needs to start happening before BbWorld. Customers need to start seeing support tickets closed faster. They need to hear what’s going to happen to their platform, how long they can stay on it and what will happen if they do. Upgrade path announcements will likely have to wait for BbWorld, but even if customers have to wait a little to hear about the future, they need to be reassured about the present.
  • A return of the annual report card: The last time Blackboard was trying to rebuild its relationships with customers, leadership instituted an annual report card, showing metrics that customers cared about and providing updates on the company’s progress (or lack thereof) toward meeting those metrics. It worked. Customers felt the company understood what was important to them and was holding itself accountable. Unfortunately, the tradition left with Ray Henderson. If you start seeing metrics again on stage at BbWorld, you will know that Ballhaus understands his customer confidence challenge.
  • Ultra customers on stage at BbWorld: At this point, Ultra customers are rarer than an ivory-billed woodpecker.[1] If there are customers willing to speak publicly about a positive experience with the product, then you can start hoping that it’s real.
  • Jon Kolko and John Whitmer on stage at BbWorld: First, any time Blackboard has the opportunity to show that it is capable of retaining a senior employee who has ear grommets and multiple tatts, it should. But beyond that, Kolko and Whitmer are the two people in the company who can clearly, credibly, and persuasively talk about a compelling, educationally richer future for the Blackboard platforms. If these two guys appear on stage, it will suggest that Ballhaus has figured out how to separate baby from bathwater.

Blackboard VP of Design Jon Kolko

A few weeks ago, during an ELI end-of-the-year webinar, I predicted that 2016 would be an eventful year for the LMS. We’re four days in so far.

Four days, people.

  1. Look it up.

The post What Blackboard’s New CEO Needs to Do Now (and how you can tell if he’s doing it) appeared first on e-Literate.

AppsLab Research in 2015

Oracle AppsLab - Mon, 2016-01-04 11:20

As we exit 2015 and enter 2016, I’m reflecting on all that happened in AppsLab and looking forward to the future. Our 2015 research spanned the spectrum – from attitudinal to behavioral, domestic to international, controlled to ad hoc, low to high tech, and many more research tactics. I won’t bore you with stats and an exhaustive list of studies. Rather, here is a brief recap of some of our research and interests.

We studied smartwatches a bit this year. We ran focus groups to gauge interest and identify use cases. We ran user journal studies to learn about user adoption and behavior patterns. We ran guerrilla usability studies with prototypes to evaluate features and interactions. We used stars and stickies to gather feedback. We used Oracle Social Research Management (SRM) to glean insight from social media.


Thao and Ben leading a focus group at HCM World in March.


Lo-fi research at the OAUX Exchange during OpenWorld in October.


Ben, Tawny and Guido our guerrilla testing team at OpenWorld in October.

We designed and built a Smart Office, which we used to spark conversations and perspectives on the future of work and user experience. Ironically, we used low tech methods (with posters, stickies and stickers) to gather feedback on the high tech office.


Smart Office demonstration at the OAUX Exchange during OpenWorld in October.

We also got out of the labs and headed to customers and partners in Europe and Asia for global perspectives.


Anthony Lai showing OAUX extensibility to a group of partners in Beijing in April.

To close out 2015 and start 2016, we opened the OAUX Gadget Lab, a hands-on lab where visitors will be able to come in and experience the latest technologies with us.


The new Gadget Lab at Oracle HQ.

Stick around with us in 2016 to see what we are up to.

Blackboard Replaces CEO Jay Bhatt: What happened

Michael Feldstein - Mon, 2016-01-04 10:55

By Phil Hill

Just over four years since Providence Equity Partners acquired Blackboard and three years after they brought in Jay Bhatt to replace co-founder Michael Chasen, the company announced another change in CEO. Blackboard has removed Jay Bhatt and replaced him with Bill Ballhaus. The official reason from the announcement:

Today, we are fortunate to be joined by a great leader – our new CEO Bill Ballhaus. Bill’s philosophy is directly in line with ours and his skill set is going to help us reach new heights. While this is certainly a change for Blackboard, rest assured that the heart of our mission and strategy will remain the same. [snip]

So we have defined our strategy and now, with Bill joining the company, we’ll continue to execute against it. Bill has accomplished much over his career and his operational expertise has led various businesses to great success. He and I share a fundamental belief that if you make your first priority taking care of your customers, the business results will follow. So, under his leadership Blackboard will continue our focus on doing just that. We will deliver next generation teaching and learning capabilities to the market, continue our international growth, and improve even further the way we serve our customers and strive to exceed their expectations. Bill is uniquely positioned to help us execute against these priorities, and with him we’ll achieve significant advances for our customers and for Blackboard.

While the official messaging is ‘full steam ahead’, to me this is a straightforward story that we have already been covering at e-Literate. In a nutshell, the attempted sale of Blackboard this year has failed, and the company has stalled in its turnaround attempts.

In Summer 2015 we learned from Reuters that Providence was actively putting Blackboard up for sale, a story we confirmed in August. At December’s meeting with journalists (somehow the e-Literate invitation was lost somewhere), Jay Bhatt seemed unprepared for a question about this sale as covered by Inside Higher Ed.

“Look, we’re private equity owned,” Bhatt said, adding that Blackboard’s current owners, Providence Equity Partners, are “really sympathetic” to the company’s cause. “But they are a private-equity investor, and they are looking for a return. Are we up for sale? Not necessarily. Are we always up for sale? Probably. Just like every other company in the public market is up for sale. Every time somebody buys a share of stock, they’re up for sale.” [snip]

“We will be sold at some point,” Bhatt said. “We’ll either be sold to the public market and be a public company like we were for 10 years, or we’ll be sold to another private investor. Something will happen so our investors can monetize, but it won’t change the strategy or the focus of our company.”

Beyond the muddled messaging, Bhatt’s answer strongly indicates that no buyer is really interested and that the attempted sale is not happening. The change in CEO is an additional confirmation to me, at least, that the attempts to sell Blackboard were unsuccessful and have been withdrawn.

The new management team brought on three years ago understood the need to make some major fixes to Blackboard, including re-architecting the core Learn product line, removing the company silos, cutting costs, and finding organic growth opportunities.

Given its current market position, however, Blackboard is caught between the need to invest in and complete a product re-architecture that is highly complex and aggressive, and the financial requirements of highly leveraged private equity ownership: the need to invest versus the need to cut costs.

The results to date have been mixed.

  • We have described the efforts to redesign Learn Ultra – moving the core product line into a multitenant cloud architecture and rethinking (not just tweaking) the user experience. Blackboard is now at least a year late bringing this product redesign to market while putting out confusing messaging on what Ultra is and when it will be ready. The company is implementing new user experience designs for Collaborate and other products, but none of these matter that much without the core LMS, Learn. Recently the University of Phoenix parent company confirmed through e-Literate that they are replacing their homegrown learning platform with Blackboard Learn Ultra, scheduled for Summer 2016. This represents the biggest win for the Learn LMS in several years.
  • Blackboard has been quite active in removing company silos. Product development and support in particular have been reorganized to centralized teams, and sales of products have been grouped into bundles. In the fall, however, Blackboard reversed some of these changes and moved partially back into more independent product lines.
  • The company has held a wave of layoffs – several per year – since the Providence acquisition, and they have moved hundreds of product development jobs overseas to their Shanghai office. The cuts have led Blackboard to move its headquarters back to the early 2000s location; Blackboard recently signed a lease that will trim its corporate headquarters by 37%. Beyond layoffs, many key management and staff have been leaving Blackboard over the past year.
  • The only area seeing significant organic growth – not just as a direct result of corporate acquisition – has been Blackboard’s international operations for both Learn and their Moodle Solutions. K-12 revenues have even dropped, and higher ed revenues are stagnant.

When Moody’s updated their ratings of Blackboard’s $1.3 billion in publicly-held debt in Spring 2015, we got confirmation of the financial status of the company. Revenues are stagnating, K-12 is even dropping, earnings have marginally increased, and debt ratios (debt-to-earnings in particular) are too high and could trigger a ratings downgrade. Blackboard is just not hitting their numbers.

We will look more deeply at the new CEO, Bill Ballhaus, and what his background might indicate for Blackboard’s future. For now, I’ll just note that Ballhaus was CEO of SRA International, a government IT services and solutions company acquired by Providence Equity Partners in 2011. After a corporate turnaround leading to increased revenue for the first time in 2014, SRA filed to go public in Summer 2015 but subsequently sold to CRC. Clearly Providence knows Ballhaus and has specifically brought him into Blackboard for a new attempt at turning around the ed tech company. Note that Ballhaus was also appointed Chairman of the Board.

Keep watching e-Literate for more coverage on this developing story.

The post Blackboard Replaces CEO Jay Bhatt: What happened appeared first on e-Literate.

Security Link Roundup - January 4, 2016

Mark Wilcox - Mon, 2016-01-04 10:50

January 4, 2016 Oracle Consulting Security Link Roundup

I'm Mark Wilcox.

I am the Chief Technology Officer for Oracle Consulting - Security in North America, and this is my weekly roundup of security stories that interested me.

###

Database of 191 million U.S. voters exposed on Internet: researcher

So 2016 starts off with another headline of a database breach.

In this case 191 million records of US voters.

This is ridiculous.

And could have been prevented.

And it is a sobering reminder to contact your Oracle representative and ask them for a database security assessment by Oracle Consulting.

###
Secure Protocol for Mining in Horizontally Scattered Database Using Association Rule

Data mining is a hot topic - it's essential to marketing, sales and innovation. Companies have lots of information on hand, but until you start mining it, you can't really do anything with it.

And often that data is scattered across multiple databases.

In this academic paper from the "International Journal on Recent and Innovation Trends in Computing and Communication" the authors describe a new protocol that they claim respects privacy better than other options.

On the other hand - Oracle already has lots of security products (for example database firewall, identity governance) that you can implement today to help make sure only the proper people have access to the data.

So make sure to call your Oracle representative and ask for a presentation by Oracle Consulting on how Oracle security can help protect your data mining databases.

###
A Guide to Public Cloud Security Tools

Cloud computing is happening.

And most people are still new to the space.

This is a good general article on the differences in security between public and private clouds.

Plus has a list of tools to help you with cloud security.

And if you want to use the cloud to host Oracle software - please call your Oracle representative and ask them to arrange a meeting with Oracle Consulting Security to talk about how Oracle can help do that securely.

###
Survey: Cloud Security Still a Concern Heading into 2016

Security continues to be the biggest concern when it comes to cloud.

While there are challenges - I find securing cloud computing a lot simpler than on-premises.

Assuming your cloud hosting is with one of the major vendors such as Oracle or Amazon.

And if you want to use the cloud to host Oracle software - please call your Oracle representative and ask them to arrange a meeting with Oracle Consulting Security to talk about how Oracle can help do that securely.
###
40% BUSINESS DO NOT USE "SECURITY ENCRYPTION" FOR STORING DATA IN CLOUD

"Holy crap, Marie."

I watch a lot of reruns of "Everybody Loves Raymond" and I feel like this story is another rerun.

Except unlike Raymond this is a rerun of a bad TV show.

Encrypting a database is one of the best ways to secure your data from hackers.
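
As one example - a minimal sketch, assuming the Oracle Advanced Security (TDE) option with an open wallet and a placeholder datafile path - an encrypted tablespace can be created like this:

SQL> CREATE TABLESPACE secure_data DATAFILE '/u01/oradata/secure_data01.dbf' SIZE 100M ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);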

So before you start storing data in the cloud, in particular with an Oracle database, make sure you have Oracle Consulting do a security assessment for you.

That way you can know what potential problems you have before you start storing sensitive production data.

###
image credit unsplash.

ANSI bug

Jonathan Lewis - Mon, 2016-01-04 07:12

In almost all cases the SQL you write using the ANSI standard syntax is transformed into a statement using Oracle’s original syntax before being optimised – and there are still odd cases where the translation is not ideal. This can result in poor performance, and it can result in wrong results. The following examples arrived in my in-tray a couple of weeks ago:

with
    table1 as ( select 1 my_number from dual ),
    table2 as ( select 1 my_number from dual )
select *
    from (
        select sum(table3.table2.my_number) the_answer
            from table1
            left join table2 on table1.my_number = table2.my_number
            group by table1.my_number
        );


with
    table1 as ( select 1 my_number from dual ),
    table2 as ( select 1 my_number from dual )
select sum(table3.table2.my_number) the_answer
    from table1
    left join table2 on table1.my_number = table2.my_number
    group by table1.my_number;

Notice the reference to table3.table2.my_number in the select list of both queries – where does the “table3” bit come from ? These queries should result in Oracle error ORA-00904: “TABLE3″.”TABLE2″.”MY_NUMBER”: invalid identifier.

If you’re running 11.2.0.4 (and, probably, earlier versions) both queries produce the following result:


THE_ANSWER
----------
         1

1 row selected.

If you’re running 12.1.0.2 the first query produces the ORA-00904 error that it should do, but the second query still survives to produce the same result as 11.2.0.4.
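
For comparison, here is a minimal sketch of what the select list was presumably meant to reference - table2.my_number without the spurious table3 prefix - which is valid on both versions and returns the same answer:

with
    table1 as ( select 1 my_number from dual ),
    table2 as ( select 1 my_number from dual )
select sum(table2.my_number) the_answer
    from table1
    left join table2 on table1.my_number = table2.my_number
    group by table1.my_number;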


Video Tutorial: XPLAN_ASH Active Session History - Part 7

Randolf Geist - Mon, 2016-01-04 03:00
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality continues the actual walk-through of the script output.

More parts to follow.

Oracle Application Management Pack for Oracle Utilities 13.1.1.1.0 available

Anthony Shorten - Sun, 2016-01-03 21:27

We are pleased to announce that a new version of the Oracle Application Management Pack for Oracle Utilities has been released to support the new release of Oracle Enterprise Manager 13c. We are excited to offer this new pack which now supports the new features of Oracle Enterprise Manager including:

  • The user interface has been updated to reflect the new Alta look and feel implemented by Oracle Enterprise Manager
  • The Always On feature, which Oracle Enterprise Manager uses to drastically reduce downtime during Oracle Enterprise Manager or pack maintenance, is now supported
  • The System Broadcast feature is now supported, allowing broadcasts to all Oracle Enterprise Manager users
  • Support for Brownouts is now included, whereby non-scheduled outages are calculated separately for Service Level Agreement checking
  • and many more...

The functionality of the pack is the same as the latest release of the pack for Oracle Enterprise Manager 12c, for backward compatibility reasons. This pack requires Oracle Enterprise Manager 13c. The new version of the pack is available from Self Update within Oracle Enterprise Manager 13c and from the Oracle Software Delivery Cloud.

A new release of the pack is also scheduled in the near future with additional functionality to fully exploit additional new and exciting features of Oracle Enterprise Manager 13c. For more information about Oracle Enterprise Manager 13c refer to the EM blog post.

Product vs Solution vs Infrastructure

Anthony Shorten - Sun, 2016-01-03 20:29

One of the most common questions I get from partners is support for features that are typically in the infrastructure. The main issue here is that some partners confuse what is in the product and what is in the infrastructure and the implementation solution. Let me explain.

The Oracle Utilities Application Framework based products are applications housed within J2EE infrastructure (such as Oracle WebLogic and in some cases IBM WebSphere) and for batch, housed in a runtime version of Oracle Coherence.

Now there is a degree of separation between the product and the infrastructure. Each has distinct roles, and those roles are only duplicated across what we call touchpoints between the product and the infrastructure. Another complication that comes into play is the role of the solution, which is the particular configuration of the product and the infrastructure to suit a particular need.

When I was considering writing this article to highlight the differences between product, infrastructure and solutions, I bounced around a few ways of describing it, but I found the best way is in the form of a common example.

Let's use the example of security authentication (aka who are you?). This is essentially the feature of securing and identifying the user when connecting to the product. The most common example of this is known as challenge and response (or more commonly userid and password).

In terms of the roles security authentication is described as follows in terms of product, infrastructure and solution:

  • The product does not store userid and password itself. It does not make sense in the context of an enterprise application as typically security is enterprise wide, for efficiency reasons, not just for a particular product. This is delegated to the J2EE container (Oracle WebLogic/IBM WebSphere) to authenticate the user. The product relies on the container to pass or fail an authentication attempt.
  • The J2EE container, which is part of the infrastructure, supports various security repositories and standards via security connectors. For example, if you have a corporate security server that holds users and passwords then you can connect it via LDAP to the container to now implement a common identity store. The J2EE container supports a wide range of adapters and in the case of Oracle WebLogic you can implement multiples for different parts of your business. An example of this is where you can separate administration accounts from common users using different identity stores.
  • A solution for the product is a distinct configuration of the J2EE container with appropriately configured security connectors. This can also mean that you externalize this function even further by implementing an Identity Management solution such as Oracle Identity Management Suite.

As you see in the example, there are distinct differences between the product, solution and infrastructure. You can apply the same logic to a wide range of implementation aspects needed to be considered.

Now, let's focus on a particular issue using the example above. Where should users be able to change their password?

  • The product does not have inbuilt password change functionality. This is because in a solution context, it makes no sense. This is why we do not supply one. It does not mean you cannot add this functionality to the menu as a common function.
  • The product is always connected to a security repository via the J2EE container (even the default one shipped with the J2EE container). The password change function is at the infrastructure level not the product level.
  • Typically you can change passwords from external sources, which is much more logical. Let's take the common example of reusing the same security repository for LAN login and the product (via a common LDAP source, with or without SSO). In this example, the LAN login typically allows you to change your password, which would then apply to all connected applications. It makes no sense in this example to also duplicate the functionality in the product. Also, why would you let the product change a security repository?

The above example brings the discussion into sharp focus.

Now, how do I deal with these situations? I call it "What would product <blank> do in this situation?", where <blank> is your favorite desktop application. I usually use Office as an example (not a great example, but something most people understand). You would not expect Word or its equivalent to have a password maintenance function. No, it does not make sense. Word, in this example, uses the features of the operating system for all sorts of functions like printing, scanning, etc. The application does not have all these functions inbuilt (otherwise it would not really be a word processor).

Hope this clarifies the situation.

Password Expiry Issue

Online Apps DBA - Sun, 2016-01-03 09:05

We were trying to start the IDM services, but they were failing with an OPMN-related error.


 

1. As the issue was with the OPMN services, we tried to start OPMN manually, but it failed to start. We looked into the opmn log file (located under $ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn), and it showed the below issue:

 

[2015-11-17T01:33:48-05:00] [opmn] [TRACE:1] [536] [pm-workers] Job 0 0 result: [[

oid1~oid1~oidmon~OID~2045482973:2865

Status: Stop

Operation: internal (time out while waiting for a managed process to stop)

ErrFile: /u02/oracle/config/instances/oid1/diagnostics/logs/OID/oid1/console~OID~1.log

String: second stop attempted

 

]]

 

[2015-11-17T01:33:48-05:00] [opmn] [NOTIFICATION:1] [663] [pm-process] Stopping

Process: oid1~oidmon~OID~1 (2045482973:2865)

2. After that we looked into the oidmon.log file, located under /u02/oracle/config/instances/oid1/diagnostics/logs/OID/oid1:

[2015-11-17T02:04:27-05:00] [OID] [NOTIFICATION:16] [] [OIDMON] [host: iamdemo04.k21technologies.com] [pid: 4425] [tid: 0] Guardian: [oidmon]: Unable to connect to database,

will retry again after 10 sec

[2015-11-17T02:04:37-05:00] [OID] [NOTIFICATION:16] [] [OIDMON] [host: iamdemo04.k21technologies.com] [pid: 4425] [tid: 0] Guardian: Connecting to database, connect string is oiddb

[2015-11-17T02:04:37-05:00] [OID] [NOTIFICATION:16] [] [OIDMON] [host: iamdemo04.k21technologies.com] [pid: 4425] [tid: 0] Guardian: [gsdsiConnect] ORA-28002, ORA-28002: the password will expire within 5 days

[2015-11-17T02:04:37-05:00] [OID] [NOTIFICATION:16] [] [OIDMON] [host: iamdemo04.k21technologies.com] [pid: 4425] [tid: 0] Guardian: [oidmon]: Unable to connect to database,

will retry again after 10 sec

 

So, as shown in the logs above, the issue was with a database user whose password was about to expire (ORA-28002).

Temporary Fix

For fixing the issue temporarily, reset the password of the expired users. To check the status of the users, run the below command after connecting to the database:

 

SQL> select USERNAME, ACCOUNT_STATUS, LOCK_DATE, EXPIRY_DATE from dba_users;

 

After that, run the below command to reset the password:

SQL> alter user <username> identified by <password>;

For Example

SQL> alter user ODS identified by k21technologies;

       

Permanent Fix

For a permanent fix of this issue, create a password profile and assign it to the database users (after resetting the password), as shown below:

 

SQL> CREATE PROFILE FMWNOLOCK LIMIT FAILED_LOGIN_ATTEMPTS UNLIMITED PASSWORD_LIFE_TIME UNLIMITED;

SQL>  ALTER USER ODS PROFILE FMWNOLOCK;
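
As a quick sanity check (a minimal sketch using the ODS user and the FMWNOLOCK profile created above), you can confirm the account status and the assigned profile:

SQL> select USERNAME, ACCOUNT_STATUS, PROFILE from dba_users where USERNAME = 'ODS';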

 

 

The post Password Expiry Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Happy New Year 2016 , best wishes to all

OracleApps Epicenter - Sat, 2016-01-02 01:49
'Tis the season to be jolly! Time truly flies when you are doing the things you love and with another year behind us, we can't help but feel a little nostalgic and look back at what the past twelve months have brought us. It was a busy year on the personal and professional side. In terms […]
Categories: APPS Blogs

Oracle Management Cloud : The Next Generation Real-Time Monitoring and Analytics IT Tool

OracleApps Epicenter - Sat, 2016-01-02 01:21
Oracle Management Cloud (OMC) is a suite of next-generation integrated monitoring, management, and analytics cloud services built on a scalable big data platform that provides real-time analysis and deep technical and business insights. With OMC you can eliminate disparate silos across end-user and infrastructure data, troubleshoot problems quickly, and run IT like a business. OMC meets […]
Categories: APPS Blogs

A Few Resolutions for 2016

Joel Kallman - Fri, 2016-01-01 09:15


Jenny, from the Oracle Database Insider Newsletter, asked a number of us in the Database division at Oracle to share our New Year's resolutions for 2016.  And while I'm a bit reluctant to share this somewhat personal information, I like the fact that publicizing these resolutions may force me to remain a bit more focused on these goals.  So here goes...my resolutions for 2016:


  1. Attend an Oracle Real World Performance Training class.  I thought I knew a fair amount about the Oracle Database, SQL and tuning. But at a conference in 2015, I was able to spend some quality time around Vlado Barun from the Oracle Real World Performance team, and it quickly became clear I knew very little compared to these folks. I’m asked to diagnose “APEX issues” all the time, and the vast majority of cases are simply database configuration or SQL tuning exercises.  To become a better database developer, I need to become deeper in my understanding of the Oracle Database and performance.
  2. Broaden the message of APEX, Database and Oracle Cloud development to those we’re not reaching today.  And I specifically would like to share our message with higher education institutions and students attending university.  Developing Web and responsive applications is cool and I believe the combination of technologies (SQL, PL/SQL, APEX, Oracle Database, Cloud, REST) results in an incredibly rich application development platform.  University students probably think of “big, bad corporate” when they hear the word “Oracle”.  I want them to think “hip, cool, innovative, modern”.
  3. Be more patient and understanding of those who ask me questions.  I can actually credit a customer (Erik van Roon) who helped me to recalibrate my understanding on this topic.  Sometimes I’ll get questions where it’s clear someone hasn’t done the least bit of research into the topic.  And it was at those rare times when (to a fellow employee, never a customer), I’d reply with a lmgtfy.com link.  But as Erik correctly pointed out - I have 20 years experience, and they don’t.  Arrogance may not be the message I intend to send, but it may very well be the message that is received.  And that’s not how I wish to be perceived by anyone, ever.  Thus - time to drop my impatience and arrogance, for every occasion.
  4. Spend more time with my family.  2015 was a great year for Oracle Application Express, and I’ve never worked harder in my career than I did in 2015.  But that has a price, and I value the finite time with my family more than anything else.  While I love working for Oracle and I dearly love the team I’m blessed to work with, I value my family even more.  And I need to define a bit more rigid boundaries between work and family time.
  5. Read a novel.  When I read, it’s usually one of the following:  the Bible, a functional specification, a military history book, a computer programming/Web design book or the Wall Street Journal. My wife is an avid reader and gets such joy from well-written and captivating novels.  I’d like to expand my imagination (and vocabulary), and be able to set aside time for some reading at leisure.
  6. Learn a language.  I’ve dabbled back and forth with German over many years.  And I know enough German to order food in a restaurant.  But I’m not fluent enough for even the shortest of conversations in German. It’s time to either forge ahead with my self-study of German and practice it with the 3 native German speakers on the APEX team, or simply switch gears and direct my focus to Spanish which is probably much more practical, living in America.
  7. Exercise at least 3 times a week.  The older I get, the easier it is to gain weight and get out of shape, and the more difficult it is to lose it and get back in shape.  And by "exercise", I don't mean walk around the block.  Instead, I'm referring to something that causes you to sweat - running, biking, jumping rope, or resistance exercises (the Total Gym will work just fine!).  While I fantasize about training enough to run a 1/2 marathon in 2016, I'll be happy enough to just consistently exercise 3 times a week.
These are the goals.  Some are easy.  Some will span the entire year.  I probably won't meet them all, but they're a goal.
What are your goals for 2016?

Expert

Jonathan Lewis - Fri, 2016-01-01 07:02

I was sent the following email a few years ago. It’s a question that comes up fairly frequently and there’s no good answer to it but, unusually, I made an attempt to produce a response; and I’ve decided that I’d start this year by presenting the question and quoting the answer I gave. So here, with no editing, is the question:

I’m disturbing you for some help about becoming an Oracle master expert. Probably you are getting this kind of emails a lot but I would be appreciate if you give a small answer to me at least.

First, shortly I want to introduce my self. I’m an *Oracle Trainer* in Turkey Oracle University for 2 years. Almost for 4 years, I worked as software engineer and meet with Oracle on these days. After a while I decided to develop myself in Oracle database technologies and become trainer as i said. I also give consultancy services about SQL / PLSQL development and especially* SQL / PLSQL tuning*. I really dedicate myself to these subjects. As a trainer I also give DBA workshop lectures but in fact I didnt actually did dba job in a production system. I have the concept and even read everything I found about it but always feel inadequate because didnt worked as a DBA on a production system. So many DBA’s has taken my class and they were really satisfied (they have got all answers for their questions) but I did not. I’m a good trainger (with more that 97 average points in oracle evaluations) but I want to be best.

Even in SQL / PLSQL tuning, I know that I am really good at it but I also aware that there are some levels and I can not pass through the next level. for ex: I can examine execution plan (index structures, access paths etc), find cpu and io consumption using hierarchical profiler and solve the problem but can’t understand yet how to understand how much IO consumed by query and understand slow segments. if you remember, for a few days ago, on OTN you answered a question that I involved about sequence caching and Log file sync event. There, I said that sequence can cause to log file sync event (and as you said that was true) but when someone else write a simple code and couldnt see this event, I couldnt answer to him, you did (you said that it was because optimizing).

that is the level what i want to be. I am really working on this and age on 29. but whatever I do I cant get higher. I need a guideness about that. I even worked free for a while (extra times after my job here). I need your guideness, as I said I can work with you if you want to test and I want to learn more advanced topics while working. In Turkey, I couldn’t find people who can answer my questions so I can not ask for guideness to them.

And my (impromptu, and unedited) reply:

Thank you for your email. You are correct, I do get a lot of email like this, and most of it gets a stock response; but yours was one of the most intelligently written so I’ve decided to spend a little time giving you a personal answer.

Even if you were to spend a few years as a DBA, you would probably not become the sort of expert you want to be. Most DBAs end up dealing with databases that, for want of a better word, we could call “boring”; for a database to be interesting and show you the sorts of problems where you have to be able to answer the types of question I regularly answer you probably need to be the DBA for a large banking or telecoms system – preferably one that hasn’t been designed very well – that has to handle a very large volume of data very quickly. On these extreme systems you might find that you keep running into boundary conditions in Oracle that force you to investigate problems in great detail and learn all sorts of strange things very quickly. On most other systems you might run into a strange problem very occasionally and spend several years on the job without once being forced to solve any difficult problems very quickly.

If you want to become an expert, you need to be a consultant so you get to see a lot of problems on lots of different systems in a very short time; but you can’t really become a consultant until you’re an expert. As a substitute, then, you need to take advantage of the problems that people report on the OTN database forum – but that doesn’t mean just answering questions on OTN. Look for the problems which people have described reasonably well that make you think “why would that happen”, then try to build a model of the problem that has been described and look very closely at all the statistics and wait events that change as you modify the model. Creating models, and experimenting with models, is how you learn more.

Take, for example, the business of the sequences and pl/sql – you might run the test as supplied with SQL_trace enabled to see what that showed you, you could look very carefully at the session stats for the test and note the number of redo entries, user commits, and transactions reported; you could look at the statistics of enqueue gets and enqueue releases, ultimately you might dump the redo log file to see what’s going into it. Many of the tiny little details I casually report come from one or two days of intense effort studying an unexpected phenomenon.  (The log file sync one was the result of such a study about 15 years ago.)
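
As an illustration of the kind of check described above (a minimal sketch, not part of the original exchange), the session statistics mentioned can be watched from the test session itself:

alter session set sql_trace = true;

select sn.name, ms.value
from   v$statname sn, v$mystat ms
where  sn.statistic# = ms.statistic#
and    sn.name in ('redo entries', 'user commits');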

Happy new year to all my readers.


Happy New Year 2016!

Tim Hall - Fri, 2016-01-01 06:31

Happy New Year to everyone! Yes, even you!

I’m not big on new years resolutions, since I always end up breaking them on the first visit to the 24 hour Tesco store down the street! So in a similar vein to a post I wrote in 2012, here is my mission statement for the year!

Content

  • Keep doing the website and the blog. Hopefully this should be one I actually achieve. :)
  • Try to keep making videos for my YouTube channel. I enjoy doing the videos, but they take so long to produce it’s really not a good use of my time. It would be easier if I quit everything else and just focused on it, but that wouldn’t really make sense, but then again…
  • Diversify. I’m not talking about a full on change of direction, but I shouldn’t be scared to try different things out. If they work, great. If they fail, move on. Sounds so simple, but doesn’t always feel that way.

Fitness

  • I’m starting 2016 the heaviest I’ve been in a long time, over 250 pounds or over 115 Kg for you metric types. At my age and with my medical history, it’s really not a great place to be. You gotta eat less if you wanna see 2017 man!
  • Keep going to the gym. I love going to the gym and I like chucking round loads of weight. I’ve probably got to hold back a bit more for the sake of my wrists, elbows and shoulders.
  • Keep stretching, but pay some more attention to back flexibility and general posture. There’s no point having great posture for a few minutes of yoga, then slouching for the rest of the day.
  • More cardio! Swimming is the only cardio I enjoy, but to put it mildly, I dislike other swimmers. I should start walking more.

So really, this year has to be a year of moderation in everything to do with fitness. Especially where food is concerned.

Work

This is a weird one for me, because basically I just shouldn’t work. I’m good at the technical side of IT, but I am terrible at the politics and bullshit. What would make me happy is to quit my job and go back to the life I had for the 4 years before I started working at this place. Just sitting at home, playing with tech and writing about it, with the odd conference thrown in for good measure.

The problem is, writing about technical stuff when you are not using it daily in your job is bullshit. You end up in this little bubble of idealism and totally lose touch with the day-to-day grind that most developers and DBAs have to deal with.

I need to work so that I stay connected with reality, which has a beneficial effect on my content.

Personal

Just “do me” and forget about the haters. The more popular you get, the more haters you acquire. The internet is a toxic place and you’ve just got to try and ignore them.

I think that will do for now! :)

Have a good year everyone, and I hope you achieve at least a few of your goals for the year!

Cheers

Tim…


Happy New Year 2016

Senthil Rajendran - Thu, 2015-12-31 20:28

Big Data Co-op Experience at Pythian

Pythian Group - Thu, 2015-12-31 11:40

 

I joined Pythian as a Big Data co-op for duration of 4 months. At the time I joined, I had only heard of Big Data as a buzzword in the technical community. So when I was selected for the job I was quite nervous and at the same time very excited and eager to learn.

During my first 2 weeks, I attended BORG sessions with department heads to learn about all departments and how company is run. My favourite one was the session with CEO Paul Vallée where he talked about how he started the company in his basement and his vision for the future. I was repeatedly hearing “Pythian hires the top 5% talent in the industry”, which was intimidating thinking that I’m surrounded by brilliant people and I didn’t know much. However, that feeling didn’t last very long once I got to know my coworkers. Everyone at Pythian is very helpful and there are several mediums for knowledge transfer among employees. Pythian is constantly hiring so if you think you’re new and feel hesitant at first, remember there will be someone newer than you in 2 weeks or less.

Some of the exciting events I encountered during my work-term included the Pythian days, the quarterly all-hands meeting, the Pythian Future Leaders session for career growth, the holiday party and, of course, the headquarters move from St-Laurent to Westboro. I also enjoyed the weekly Pythianology sessions, which are a great way to learn about best practices for dealing with data.

Being part of the Big Data team, I had to learn to work in a distributed environment. I worked from the Ottawa headquarters while my team members worked remotely from other cities in Canada and worldwide. This is undoubtedly one of the most important skills I learned during my time at Pythian.

Coming to an end of my work-term, I wouldn’t say I’m an expert in Big Data and ready to deal with customers just yet, but I sure learned the type of challenges that customers face. I like to tackle problems on my own, so I strived to get things running with help from Google, Stackoverflow and the Pythian blog before I asked team members for help. Every now and then I would hit a roadblock in which case my team members would guide me in the right direction. An important lesson I learned is that theoretical knowledge is just as important as practical knowledge. As a programmer, I had trained myself to learn by practice so I wanted to get started with my project as quickly as I could. In doing so I skipped over some of the reading I should have focused on. Due to this I struggled with errors that could have been avoided and saved me a lot of time. In my experience, dealing with Big Data technologies is very different from programming (at least, at first). Before you can get to the programming stage, you need to deal with A LOT OF configurations. I’m happy to say that I not only learned about technologies directly related to Big Data like hadoop, MapReduce, Hive, Oozie, Hue, CDH etc. but learned a lot more. In the last 4 months I have dealt with at least 4 flavours of Operating Systems including Windows 7, windows 10, Ubuntu and CentOs. I have worked with Virtual Machines running locally on my system and Virtual machines running on the cloud. I worked mostly on Google Cloud but at some point for a brief period of time explored Amazon Web Services and Microsoft Azure services. I also played around with docker images which I found to be pretty cool and plan to keep using them in future.

I confess that if I had explored Big Data technologies on my own time, I would have given up very early on, but being at Pythian allowed me to stay committed to my goal. I was allowed to learn at my own pace while experimenting and failing repeatedly, and I got help from my team members when I needed it, which helped me achieve my goal for the work term. Being new to Big Data can be overwhelming given how fast the field is growing and how many technologies there are. An important piece of advice I received is not to try to learn too much at once, but to take one technology at a time and understand it well. Having gotten this far, I’m motivated to dive in deeper and explore further to improve my skills.

I highly recommend that students join Pythian for a co-op term for the chance to learn from industry leaders worldwide. At Pythian you will not be treated as an intern but as a full-time employee. You will gain knowledge about technologies that are not covered in school, and you will get the opportunity not only to learn technologies relevant to your area of interest, but also to develop transferable skills that will be beneficial in many fields.

Categories: DBA Blogs

MySQL Benchmark in the Cloud

Pythian Group - Thu, 2015-12-31 10:50

 

Testing functionalities and options for a database can be challenging at times, as a live production environment might be required. As I was looking for different options, I was directed by Derek Downey to this post in the Percona blog.

The post discussed an interesting and fun tool from Percona, tpcc-mysql. I was interested in testing the tool, so I decided to play around with it on an AWS EC2 server.

In this post I will expand on the Percona blog post, since the tool lacks documentation, and explain how I used it to run a MySQL benchmark in AWS.

Why tpcc-mysql?

There are various reasons why tpcc-mysql could be a good option for a benchmarking project. The following points highlight most of them:

Pros:

  • Mimics the full DB structure of a real warehouse.
  • Simulates a real-life load on the server.
  • Offers plenty of options and flexibility.
  • Has a very light footprint on the system.

Cons:

  • No documentation.

Getting the Server Started

You’ll probably need to launch a new EC2 server from the AWS Console, or use an existing one that you already have up and running. Either way, it’s a good idea to save the current state of your database first. Luckily, AWS EBS offers a really good and convenient solution for this.

It is possible to create and manage snapshots of EBS volumes in the AWS Dashboard with some very basic steps. I personally prefer to set up the MySQL base and data directories together in a separate volume from the root volume. This allows me to swap between different versions and data sets without having to reconfigure my tools every time I load a snapshot.

[Image: EBS snapshot dialog] Writing a good description helps when creating new volumes.

[Image: new volume creation] Possible suggestions come up as you start typing, based on descriptions.
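If you prefer to script these steps rather than click through the console, the AWS CLI can do the same job. The commands below are only a sketch: the volume ID, snapshot ID, description and availability zone are placeholders to replace with your own values.

# Snapshot the EBS volume that holds the MySQL base and data directories (placeholder IDs)
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "MySQL data volume, tpcc1000 baseline"

# Later, recreate that state as a fresh volume built from the snapshot
aws ec2 create-volume \
  --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1a

You would still attach and mount the new volume in place of the old one, which is exactly why keeping MySQL on its own volume pays off.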

Setting up the Benchmark

Once you have taken your snapshot and configured your MySQL instance, move on to the setup. First we’ll need to install the prerequisites.

tpcc-mysql uses mysql_config, which is part of the libmysqlclient-dev package. We also need Bazaar (bzr) to fetch the source code, so we’ll go ahead and install both:

sudo apt-get install libmysqlclient-dev
sudo apt-get install bzr

 

Install & Compile tpcc-mysql

Use the following commands to download the tpcc-mysql source code and compile it:

bzr branch lp:~percona-dev/perconatools/tpcc-mysql
cd tpcc-mysql/src
make all

 

Prepare the Database & Create Required Tables

Once tpcc-mysql has been compiled, we need to prepare the database for the benchmark. This consists of running a few scripts to create the required database and tables, and to generate random data to use during the testing process.

The following steps create the database and tables for us; the scripts are all part of the tpcc-mysql package:

cd ~/tpcc-mysql
# 1. Create the database to load the data into
  mysql -u root -p -e "CREATE DATABASE tpcc1000;"
# 2. Create the required table definitions 
  mysql -u root -p tpcc1000 < create_table.sql
# 3. Add foreign keys and indexes  
  mysql -u root -p tpcc1000 < add_fkey_idx.sql

The following tables are created from the previous step:

$ mysql -u root -p tpcc1000 -e "SHOW TABLES;"
Enter password:
+--------------------+
| Tables_in_tpcc1000 |
+--------------------+
| customer           |
| district           |
| history            |
| item               |
| new_orders         |
| order_line         |
| orders             |
| stock              |
| warehouse          |
+--------------------+

As you can see, tpcc-mysql mimics a warehouse’s database, tracking customers, items, orders, stock, and so on.

Load Data into the Tables

The last step remaining before we can start our test is to populate some data into the tables. For that, tpcc-mysql has a script, tpcc_load, that does the job.

The tpcc_load script generates random dummy data in the tables created in the previous steps. The script also has a parameter that lets you specify how many warehouses you want to simulate.

The script usage is as follows:

tpcc_load [server] [DB] [user] [pass] [warehouse]

In our example, we’ll use the following:

./tpcc-mysql/tpcc_load 127.0.0.1 tpcc1000 root "$pw" 2
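Once the load finishes, a quick sanity check (my own suggestion, not part of the tool) is to confirm the data landed where expected; with the command above, the warehouse table should contain 2 rows:

mysql -u root -p tpcc1000 -e "
SELECT COUNT(*) AS warehouses FROM warehouse;  -- should match the warehouse count passed to tpcc_load
SELECT COUNT(*) AS customers FROM customer;    -- grows with the warehouse count
"
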
Beginning the Benchmarking Process

This would be a good time to take a snapshot of your server/dataset, so you can come back to it. Also, before we get started, let’s get familiar with the script we need to use for starting the benchmarking process, tpcc_start. 

The script creates transactions that execute various statements such as SELECT, UPDATE, DELETE, and INSERT, and it generates detailed progress output along with a summary at the end. You can redirect this output to a file so you can analyze it or compare runs later on.

The script comes with various parameters to give you flexibility to configure it as you desire:

tpcc_start -h[server] -P[port] -d[DB] -u[mysql_user] -p[mysql_password] -w[# of warehouses] -c[# of connections] -r[warmup_time] -l[running_time]

Now let’s get to the fun part!

The following command will start a simulation of warehouse transactions and record the output in the file tpcc-output-01.log:

./tpcc_start -h127.0.0.1 -dtpcc1000 -uroot -p -w2 -c16 -r10 -l1200 > ~/tpcc-output-01.log
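Since the -l1200 setting above keeps the run going for 20 minutes, it can be handy to watch the progress from a second terminal while it runs:

tail -f ~/tpcc-output-01.log
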
Analyzing the Output

tpcc-mysql comes with several scripts that can be used for analysis; check the tpcc-mysql/scripts folder. Examples include:

$ ls ~/tpcc-mysql/scripts/
analyze_min.sh   
analyze.sh           
anal.full.sh     
analyze_modified.sh  
...
...
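Since the tool ships with no documentation (the one item on the cons list above), the quickest way to find out what input each of these scripts expects is simply to read its first lines before running it, for example:

head -n 20 ~/tpcc-mysql/scripts/analyze.sh

A quick look like this tells you whether a script takes the log file as an argument or reads it from standard input.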

Visual Analysis of the Output

We can always take these tests a step further in many different directions. Since plotted data is a lot of fun, why not do a quick experiment with it?

The same blog post I used as my reference also includes a modified version of the analyze.sh script that comes with tpcc-mysql, named tpcc-output-analyze.sh. It extracts the time and the number of transactions for each time block, in a format that gnuplot can read for plotting. So let’s use the script on the output file:

./tpcc-output-analyze.sh tpcc-logs/tpcc-output-01.log tpcc-analyzed/time_tr_data_01.txt

To install gnuplot you simply run:

sudo apt-get install gnuplot

Then, we can create the plot using the tpcc-graph-build.sh  script (from here as well) as follows:

./tpcc-graph-build.sh tpcc-analyzed/time_tr_data_01.txt tpcc-graphs/graph01.jpg

And this generated the following plot for me:

[Image: gnuplot chart of transactions over time for the benchmark run]
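If you would rather tweak the chart than rely on the helper script, you can drive gnuplot directly. This is a minimal sketch under the assumption that tpcc-output-analyze.sh produced a plain two-column file (elapsed time, then transaction count); adjust the columns, paths and labels to whatever your output actually contains.

gnuplot <<'EOF'
set terminal jpeg size 800,600
set output "tpcc-graphs/graph02.jpg"
set xlabel "Elapsed time (s)"
set ylabel "Transactions per interval"
set grid
plot "tpcc-analyzed/time_tr_data_01.txt" using 1:2 with lines title "tpcc-mysql run 01"
EOF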

Conclusion

I hope this was helpful. As you can see, there is a lot you can do with tpcc-mysql. If you come up with anything or run your own experiments, I would love to hear about them.

 

Discover more about our expertise in MySQL and the Cloud.

Categories: DBA Blogs

SQL On The Edge #6 – SQL AlwaysEncrypted

Pythian Group - Thu, 2015-12-31 10:42

Security is on everyone’s mind these days in the IT (and the real) world, either because people are dealing with compliance, risk and mitigation at work, or because they just saw yet another news item about some big leak or breach. It is said that it’s not a question of if your systems will be attacked but when. For the SQL product family, Microsoft has now released a new feature called AlwaysEncrypted to continue mitigating risk and strengthen the product’s security story. I say the SQL ‘product family’ instead of just SQL Server because this feature is also available on Azure SQL Database.

 

What is it?
AlwaysEncrypted is the latest in the set of features that enables encryption inside SQL Server. Let’s look at the list so far:

  • Column-level encryption: This targets specific columns in specific tables, with the encryption/decryption happening at the server.
  • Transparent Data Encryption (a.k.a. TDE): This targets entire databases and is transparent to the calling application. It’s also transparent to any user with proper access to the data.
  • AlwaysEncrypted: This also targets specific columns in specific tables, with the encryption/decryption happening ON THE CLIENT.

This is the big difference with the new feature: the encrypt/decrypt operations happen on the client, NOT on SQL Server. That means that if your SQL Server is compromised, the key pieces needed to reveal the data are NOT on the server. It also means that even if your DBAs want to see the data, they won’t be able to see the values unless they have access to the CLIENT application.

 

How Does it Work?
This feature can be enabled through T-SQL or through a wizard in Management Studio. The actual data manipulation is done by the latest version of the ADO.NET client: during configuration, the client reads all of the data, performs the encryption and sends it back to SQL Server for storage. The 4.6 release of the .NET Framework is required. There’s a Column Master Key that has to be stored in a Windows certificate store, Azure Key Vault or other third-party key storage software. During normal application operation, the ADO.NET client reads this master key and uses it to decrypt and encrypt the values.

There are two options for this type of encryption (both appear in the sketch below):

  1. Randomized: This will make the same source values encrypt into DIFFERENT encrypted values. Useful for columns that could be correlated just by looking at them and that won’t be used for searching.
  2. Deterministic: This will make the same source values encrypt into the SAME encrypted values, thus allowing for indexing and equality searches.
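To make the “enabled through T-SQL” path a little more concrete, here is a minimal sketch of a table that uses both encryption types. Everything in it is illustrative: the server, database, table and column names are made up, and it assumes a column encryption key (CEK_Auto1 below) has already been provisioned, for example by the SSMS wizard. Note that string columns encrypted deterministically also need a BIN2 collation.

# Illustrative only: replace the server, database, table and key names with your own.
sqlcmd -S localhost -d TestDB -Q "
CREATE TABLE dbo.Patients (
    PatientId int IDENTITY(1,1) PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL, -- can be indexed and searched
    BirthDate date
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL  -- cannot be searched
);"

Inserting or reading the actual values then goes through the ADO.NET 4.6 client with Column Encryption Setting=Enabled in the connection string, since that is where the encryption and decryption happen.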

 

For the demo, check the video below, where we use the SSMS wizard to enable AlwaysEncrypted on a column and show the decryption happening in SSIS using the ADO.NET client!

Enjoy!

 

Discover more about our expertise in SQL Server.

Categories: DBA Blogs