Feed aggregator

Inside Higher Ed: One year after selling majority stake in company

Michael Feldstein - Tue, 2016-01-19 13:01

By Phil Hill

One year ago I wrote a post critical of Inside Higher Ed for not doing a blanket disclosure about the sale of a majority stake to a private equity firm with other education holdings (most notably Ruffalo Noel Levitz).

Subsequent to the disclosure from the Huffington Post, IHE put up an ownership statement disclosing the ownership change and calling out that only editors are involved in editorial policies. The About Us page prominently links to this ownership statement.

In an interview with Education Dive, Scott Jaschik (an Inside Higher Ed founder and editor) noted his regret for not disclosing the sale up front while concluding:

“I guess I would just say to anyone who has questions, read us and read our coverage and call me if you think we’re doing anything that we shouldn’t,” [Jaschik] said.

In the past year I have done exactly that – watching carefully for editorial shifts, complaining publicly about one article, and privately emailing Jaschik on another issue.

My conclusion? Inside Higher Ed has shown no bias and no change in editorial policies based on the new ownership – they are living up to their word. IHE [Jaschik in particular] has also been quite good about discussing any questions or issues regarding their coverage. IHE should be commended for their quality coverage of higher education news.

 

The post Inside Higher Ed: One year after selling majority stake in company appeared first on e-Literate.

Drop table cascade and reimport

Laurent Schneider - Tue, 2016-01-19 12:26

Happy new year :)

Today I had to import a subset of a database and the challenge was to restore a parent table without restoring its children. It took me only a few minutes to write the code, but it would have taken days to restore the whole database.

CREATE TABLE t1(
  c1 NUMBER CONSTRAINT t1_pk PRIMARY KEY);
INSERT INTO t1 (c1) VALUES (1);
CREATE TABLE t2(
  c1 NUMBER CONSTRAINT t2_t1_fk REFERENCES t1,
  c2 NUMBER CONSTRAINT t2_pk PRIMARY KEY);
INSERT INTO t2 (c1, c2) VALUES (1, 2);
CREATE TABLE t3(
  c2 NUMBER CONSTRAINT t3_t2_fk REFERENCES t2,
  c3 NUMBER CONSTRAINT t3_pk PRIMARY KEY);
INSERT INTO t3 (c2, c3) VALUES (2, 3);
CREATE TABLE t4(
  c3 NUMBER CONSTRAINT t4_t3_fk REFERENCES t3,
  c4 NUMBER CONSTRAINT t4_pk PRIMARY KEY);
INSERT INTO t4 (c3, c4) VALUES (3, 4);
COMMIT;

expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp reuse_dumpfiles=y

Now what happens if I want to restore T2 and T3?

First, if possible, I check the dictionary for foreign keys from other tables pointing to T2 and T3.

SELECT table_name, constraint_name
FROM user_constraints
WHERE r_constraint_name IN (
    SELECT constraint_name
    FROM user_constraints
    WHERE table_name IN ('T2', 'T3'))
  AND table_name NOT IN ('T2', 'T3');

TABLE_NAME                     CONSTRAINT_NAME               
------------------------------ ------------------------------
T4                             T4_T3_FK                      

T4 points to T3 and T4 has data.

Now I can drop my tables with the CASCADE CONSTRAINTS option:

drop table t2 cascade constraints;
drop table t3 cascade constraints;
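
As a quick sanity check (a hypothetical verification step, not part of the original procedure), you can confirm that the CASCADE CONSTRAINTS clause also dropped the foreign key on T4, which is exactly why it has to be re-imported separately afterwards:

SELECT table_name, constraint_name, constraint_type
FROM user_constraints
WHERE table_name = 'T4';

-- T4_T3_FK should no longer be listed; only T4_PK remains.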

Now I import: first the tables, then the referential constraint that was dropped by the CASCADE clause and does not belong to T2/T3.

impdp scott/tiger tables=T2,T3 directory=DATA_PUMP_DIR dumpfile=scott.dmp

impdp scott/tiger  "include=ref_constraint:\='T4_T3_FK'" directory=DATA_PUMP_DIR dumpfile=scott.dmp

It’s probably possible to do it in one import, but the INCLUDE syntax is horrible. I tried it there.

Oracle Database Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle Database.  The CPUs are only available for certain versions of the Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

CPU Supported Database Versions

As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  The final CPU for 12.1.0.1 will be July 2016.  11.2.0.4 will be supported until October 2020 and 12.1.0.2 will be supported until July 2021.

11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015. 
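
Before planning, it helps to confirm which version each database is running and which CPU/PSU patches have already been applied. A minimal sketch (output layout varies slightly by version):

SELECT banner FROM v$version;

SELECT action_time, action, version, comments
FROM dba_registry_history
ORDER BY action_time;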

Database CPU Recommendations
  1. When possible, all Oracle databases should be upgraded to 11.2.0.4 or 12.1.0.2.  This will ensure CPUs can be applied through at least October 2020.
     
  2. [12.1.0.1] New databases or application/database upgrade projects currently testing 12.1.0.1 should immediately look to implement 12.1.0.2 instead of 12.1.0.1, even if this will require additional effort or testing.  With the final CPU for 12.1.0.1 being July 2016, unless a project is implementing in January or February 2016, we believe it is imperative to move to 12.1.0.2 to ensure long-term CPU support.
     
  3. [11.2.0.3 and prior] If a database cannot be upgraded, the only effective mitigating control for many database security vulnerabilities is to strictly limit direct database access.  In order to restrict database access, Integrigy recommends using valid node checking, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.  Direct database access is required to exploit most database security vulnerabilities, and most often a valid database session is required.
     

Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all Oracle databases.
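
As an illustration only (account names, parameter values, and audit options must be adapted to your environment), the hardening steps above map to statements such as:

-- Change passwords on default and service accounts
ALTER USER dbsnmp IDENTIFIED BY "new_strong_password";

-- Enable standard auditing (takes effect after restart) and audit key actions
ALTER SYSTEM SET audit_trail=DB,EXTENDED SCOPE=SPFILE;
AUDIT CREATE SESSION;
AUDIT ALTER USER;
AUDIT GRANT ANY PRIVILEGE;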

 

Tags: Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle E-Business Suite Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle E-Business Suite and its technology stack.  The CPUs are only available for certain versions of the Oracle E-Business Suite and Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

For 2016, CPUs for Oracle E-Business Suite will become a significant focus as a large number of security vulnerabilities for the Oracle E-Business Suite will be fixed.  The January 2016 CPU for the Oracle E-Business Suite (EBS) will include 78 security fixes for a wide range of security bugs, many of them high risk, such as SQL injection in web-facing self-service modules.  Integrigy anticipates the next few quarters will have an above-average number of EBS security fixes (the average is 7 per CPU since 2005).  This large number of security bugs puts Oracle EBS environments at significant risk, as many of these bugs will be high risk and well publicized.

Supported Oracle E-Business Suite Versions

Starting with the April 2016 CPU, only 12.1 and 12.2 will be fully supported for CPUs moving forward.  11.5.10 CPU patches for April 2016, July 2016, and October 2016 will only be available to customers with an Advanced Customer Support (ACS) contract.  There will be no 11.5.10 CPU patches after October 2016.  CPU support for 12.0 ended as of October 2015.
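
If you are unsure which release an environment is actually on, a quick query against the application tables can confirm it (a hedged sketch; run as a user with access to the APPS schema):

SELECT release_name FROM apps.fnd_product_groups;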

11.5.10 Recommendations
  1. When possible, the recommendation is to upgrade to 12.1 or 12.2.
  2. Obtaining an Advanced Customer Support (ACS) contract is a short term (until October 2016) solution, but is an expensive option.
  3. An alternative to applying CPU patches is to use Integrigy's AppDefend, an application firewall for Oracle EBS, in proxy mode which blocks EBS web security vulnerabilities.  AppDefend provides virtual patching and can effectively replace patching of EBS web security vulnerabilities.

In order to mitigate some mod_plsql security vulnerabilities, all Oracle EBS 11i environments should look at limiting the enabled mod_plsql web pages.  The script $FND_TOP/patch/115/sql/txkDisableModPLSQL.sql can be used to limit the allowed pages listed in FND_ENABLED_PLSQL.  This script was introduced in 11i.ATG_PF.H and the most recent version is in 11i.ATG_PF.H.RUP7.  This must be thoroughly tested as it may block a few mod_plsql pages used by your organization.  Review the Apache web logs for the pattern '/pls/' to see what mod_plsql pages are actively being used.  This fix is included and implemented as part of the January 2016 CPU.

12.0 Recommendations
  1. As no security patches are available for 12.0, the recommendation is to upgrade to 12.1 or 12.2 when possible.
  2. If upgrading is not feasible, Integrigy's AppDefend, an application firewall for Oracle EBS, provides virtual patching for EBS web security vulnerabilities as well as blocks common web vulnerabilities such as SQL injection and cross-site scripting (XSS).  AppDefend is a simple to implement and cost-effective solution when upgrading EBS is not feasible.
12.1 Recommendations
  1. 12.1 is supported for CPUs through October 2019 for implementations where the minimum baseline is maintained.  The current minimum baseline is the 12.1.3 Application Technology Stack (R12.ATG_PF.B.delta.3).  This minimum baseline should remain consistent until October 2019, unless a large number of functional module specific (e.g., GL, AR, AP) security vulnerabilities are discovered.
  2. For organizations where applying CPU patches is not feasible within 30 days of release, or Internet-facing self-service modules (e.g., iSupplier, iStore) are used, AppDefend should be used to provide virtual patching of known, not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
12.2 Recommendations
  1. 12.2 is supported for CPUs through July 2021, as there will be no extended support for 12.2.  The current minimum baseline is 12.2.3 plus roll-up patches R12.AD.C.Delta.7 and R12.TXK.C.Delta.7.  Integrigy anticipates the minimum baseline will creep up as new RUPs (12.2.x) are released for 12.2.  Your planning should anticipate that the minimum baseline will be 12.2.4 in 2017 and 12.2.5 in 2019 with the releases of 12.2.6 and 12.2.7.  With the potential release of 12.3, a minimum baseline of 12.2.7 may be required in the future.
  2. For organizations where applying CPU patches is not feasible within 30 days of release, or Internet-facing self-service modules (e.g., iSupplier, iStore) are used, AppDefend should be used to provide virtual patching of known, not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
EBS Database Recommendations
  1. As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015.  The final CPU for 12.1.0.1 will be July 2016.
  2. When possible, all EBS environments should be upgraded to 11.2.0.4 or 12.1.0.2, which are supported for all EBS versions including 11.5.10.2.
  3. If database security patches (SPU or PSU) cannot be applied in a timely manner, the only effective mitigating control is to strictly limit direct database access.  In order to restrict database access, Integrigy recommends using the EBS feature Managed SQLNet Access, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.
  4. Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all EBS databases.
Tags: Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Recover from ORA-01172 & ORA-01151

DBASolved - Tue, 2016-01-19 07:48

This morning I was working on an Oracle Management Repository (OMR) for a test Enterprise Manager that is used by a few consultants I work with. When I logged into the box, I found that the OMR was down. When I went to start the database, I was greeted with ORA-01172 and ORA-01151.

These errors basically say:

ORA-01172 – recovery of thread % stuck at block % of file %
ORA-01151 – use media recovery to recover block, restore backup if needed

So how do I recover from this? The solution is simple; I just needed to perform the following steps:

1. Shutdown the database

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

2. Mount the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.0033E+10 bytes
Fixed Size 2934696 bytes
Variable Size 1677723736 bytes
Database Buffers 8321499136 bytes
Redo Buffers 30617600 bytes
Database mounted.

3. Recover the database

SQL> recover database;
Media recovery complete.

4. Open the database with “alter database”

SQL> alter database open;
Database altered.

At this point, you should be able to access the database (OMR) and then have the EM environment up and running.
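
As a final check (not part of the original steps), you can verify that the database is fully open before bringing EM back up:

SQL> select name, open_mode, database_role from v$database;
SQL> select instance_name, status from v$instance;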

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

Using SKIP LOCKED feature in DB Adapter polling

Darwin IT - Tue, 2016-01-19 04:53
The last few days I spent describing a throttle mechanism using the DB Adapter. Today the 'Distributed Polling' functionality of the DB Adapter was mentioned to me, which uses the SKIP LOCKED clause of the database.

On one of the pages you'll get to check the 'Distributed Polling' option:
Leave it as it is, since it adds the 'SKIP LOCKED' option to the 'FOR UPDATE' clause.
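
In plain SQL the behaviour it adds looks roughly like this (a simplified sketch; the table and column names are made up and the adapter generates its own statement):

-- Session 1 locks a batch of unprocessed rows without blocking other pollers
SELECT msg_id, payload
FROM inbound_messages
WHERE status = 'N'
FOR UPDATE SKIP LOCKED;

-- Session 2 running the same statement skips the rows already locked
-- by session 1 and picks up the next available ones.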

In my example screendump I set the Database Rows per Transaction, but you might want to set it to a sensibly higher value with regard to the 'RowsPerPollingInterval' that you need to set yourself in the JCA file:

The 'RowsPerPollingInterval' is not an option in the UI, unfortunately. You might want to set this to a multiple of the MaxTransactionSize (in the UI denoted as 'Database Rows per Transaction').

A great explanation of this functionality is this A-Team blogpost. Unfortunately the link to the documentation about 'SKIP LOCKED' in that post is broken. I found this one. A nice thing is that it suggests using AQ as the preferred solution instead of SKIP LOCKED.

Maybe a better way for throttling is using the AQ Adapter together with the properties

It’s Called Data Analysis And Not Data Synthesis For A Reason

Michael Feldstein - Mon, 2016-01-18 18:31

By Phil Hill

I’ve never been a big TEDtalks fan, but recently I’ve been exploring some of the episodes, partially based on peer pressure.

@PhilOnEdTech @mfeldstein67 y'all should do a weekly PTI style podcast rundown of the issues raised each week in edtech.

— Glenda Morgan (@morganmundum) January 15, 2016

In the process I ran across a talk from Sebastian Wernicke, who has a bioinformatics background but now seems to specialize in giving talks. The talk in question is “How to use data to make a hit TV show”, which starts by looking at two data approaches to binge TV production – Amazon’s use of data analysis to choose a new show concept, leading to Alpha House, and Netflix’s use of data to look at lots of show components but then to let humans make conclusions and “take a leap of faith”, leading to House of Cards. The anecdotes set up his description of where data fits and where it doesn’t, and this mirrors what Michael and I are seeing in the broad application of personalized learning.

As we described in our most recent EdSurge article:

Bottom Line: Personalized learning is not a product you can buy. It is a strategy that good teachers can implement.

While Wernicke is not addressing education, he describes the same underlying issue in a memorable way (starting at 8:18 in particular).

Now, personally I’ve seen a lot of this struggle with data myself, because I work in computational genetics, which is also a field where lots of very smart people are using unimaginable amounts of data to make pretty serious decisions like deciding on a cancer therapy or developing a drug. And over the years, I’ve noticed a sort of pattern or kind of rule, if you will, about the difference between successful decision-making with data and unsuccessful decision-making, and I find this a pattern worth sharing, and it goes something like this.

So whenever you’re solving a complex problem, you’re doing essentially two things. The first one is, you take that problem apart into its bits and pieces so that you can deeply analyze those bits and pieces, and then of course you do the second part. You put all of these bits and pieces back together again to come to your conclusion. And sometimes you have to do it over again, but it’s always those two things: taking apart and putting back together again.

And now the crucial thing is that data and data analysis is only good for the first part. Data and data analysis, no matter how powerful, can only help you taking a problem apart and understanding its pieces. It’s not suited to put those pieces back together again and then to come to a conclusion. There’s another tool that can do that, and we all have it, and that tool is the brain. If there’s one thing a brain is good at, it’s taking bits and pieces back together again, even when you have incomplete information, and coming to a good conclusion, especially if it’s the brain of an expert.

And that’s why I believe that Netflix was so successful, because they used data and brains where they belong in the process. They use data to first understand lots of pieces about their audience that they otherwise wouldn’t have been able to understand at that depth, but then the decision to take all these bits and pieces and put them back together again and make a show like “House of Cards,” that was nowhere in the data. Ted Sarandos and his team made that decision to license that show, which also meant, by the way, that they were taking a pretty big personal risk with that decision. And Amazon, on the other hand, they did it the wrong way around. They used data all the way to drive their decision-making, first when they held their competition of TV ideas, then when they selected “Alpha House” to make as a show. Which of course was a very safe decision for them, because they could always point at the data, saying, “This is what the data tells us.” But it didn’t lead to the exceptional results that they were hoping for.

So data is of course a massively useful tool to make better decisions, but I believe that things go wrong when data is starting to drive those decisions. No matter how powerful, data is just a tool . . .

We are not the only people to describe this distinction. Tony Bates’ latest blog post describes a crossroads we face in automation vs. empowerment:

The key question we face is whether online learning should aim to replace teachers and instructors through automation, or whether technology should be used to empower not only teachers but also learners. Of course, the answer will always be a mix of both, but getting the balance right is critical.

What I particularly like about the Wernicke description is that he gets to the difference between analysis (detailed examination of the elements or structure of something, typically as a basis for discussion or interpretation) and synthesis (combination or composition, in particular)[1]. Data is uniquely suited to the former; the human mind is uniquely suited to the latter.

This is not to say that data and analytics can never be used to put information back together, but it is crucial to understand there is a world of difference between data for analysis and data for synthesis. In the world of education, the difference shows up in whether data is used to empower learners and teachers or whether it is used to attempt automation of the learning experience.

  1. Using Google’s definitions.

The post It’s Called Data Analysis And Not Data Synthesis For A Reason appeared first on e-Literate.

CrossFit and Coding: 3 Lessons for Women and Technology

Usable Apps - Mon, 2016-01-18 17:21

Yes, it’s January again. Time to act on that New Year resolution and get into the gym to burn off those holiday excesses. But have you got what it takes to keep going back?

Here’s Sarahi Mireles (@sarahimireles), our User Experience Developer in Oracle’s México Development Center, to tell us how her CrossFit experience not only challenges the myths about fierce workouts being something only for the guys, but also what that lesson can teach us about coding and women in technology too…

Introducing CrossFit: Me Against Myself

Heard about CrossFit? In case you haven’t, it’s an intense fitness program with a mix of weights, cardio, other exercises, and a lot of social media action too about how much we love doing CrossFit.

CrossFit is also a great way to keep fit and to make new friends. Most workouts are so tough that you’re left all covered in sweat, your muscles are on fire, and you feel like it's going to be impossible to even move the next day.

But you keep doing it anyway. 

One of the things I love most about CrossFit is that it is super dynamic. The Workout of the Day (WOD) is a combination of activities, ranging from running outside, gymnastics, and weight training to swimming. You’re never doing the same thing two days in a row.

Sounds awesome, right? Well, it is!

But some people, particularly women, unfortunately think CrossFit will make them bulk up and they’ll end up with HUGE muscles! A lot of people on the Internet are saying this, and lots of my friends believe it too: CrossFit is really for men and not women. 

From CrossFit to CrossWIT: Women in Technology (WIT)

Just like with CrossFit, there are many young women who also believe that coding is something meant only for men. Seems crazy, but let's be honest, hiring a woman who knows how to code can be a major challenge (my manager can tell you about that!).

So, why aren't women interested in either coding or lifting weights? Or are they? Or is it just popular opinion that there are some things women shouldn't do, rather than cannot do?

The reality is that CrossFit won't make you bulk up like a bodybuilder, any more than studying those science, technology, engineering or mathematics (STEM) subjects in school will make you any less feminine. Women have been getting the wrong messages about gender and technology from the media and from advertising since we were little girls. We grew up believing that intense workout programs, just like learning computer languages, engineering, science and math, are “man’s stuff”. And then we wonder where the women in technology are?!

3 Lessons to Challenge Conventions and Change Yourself

So, whether you are interested in these things or not, I would like to point out 3 key lessons, based on my experience, that I am sure will help you at some stage of your life:

  1. Don't be afraid of defying those gender stereotypes. You can become whatever you want to be: a successful doctor, a great programmer, or even a CrossFit professional. Go for it!

  2. Choosing to be or to do something different from what others consider “normal” can be hard, but keep doing it! There are talented women in many fields of work who, despite the stereotypes, are awesome professionals, are respected for what they do, and have become key parts of their organizations and companies. Coding is a world largely dominated by men now, with 70% of the jobs taken by males, but that does not stop us from challenging and changing things so that diversity makes the tech industry a better place for everyone.

  3. If you are interested in coding, computer science, or technology in general, keep up with your passion by learning more from others by reading the latest tech blogs, for example. If you don't know where to start, here are some great examples to inspire you: our own VoX, Usable Apps, and AppsLab blogs. Read up about the Oracle Women in Technology (WIT) program too.

I'm sure you'll find something of interest in the work Oracle does and you can use our resources to pursue your interests in a career in technology! And who knows? Maybe you can join us at an Oracle Applications User Experience event in the future. We would love to see you there and meet you in person.

I think you will like what you can become! Just like the gym, don’t wait until next January to start.

Related Links

Seven Days with the Xiaomi Mi Band: A Model of Simple Wearable Tech UX for Business

Oracle AppsLab - Mon, 2016-01-18 02:47

Worn Out With Wearables

That well-worn maxim about keeping it simple, stupid (KISS) now applies as much to wearable tech (see what I did there?) user experience as it does to mobile or web apps.

The challenge is to keep on keeping “it” simple as product managers and nervous C-types push for more bells and whistles in a wearable tech market going ballistic. Simplicity is a relative term in the fast changing world of technology. Thankfully, the Xiaomi Mi Band has been kept simple and the UX relates to me.

The Mi Band worn alongside Apple Watch (42mm version) for size.

I first heard about the Mi Band with a heads-up from OAUX AppsLab chief Jake Kuramoto (@jkuramot) last summer. It took me nearly six months to figure out a way to order this Chinese device in Europe: when it turned up on Amazon UK.

We both heard about the Mi from longtime Friend of the ‘Lab, Matt Topper (@topperge).

I’ve become jaded with the current deluge of wearable tech and the BS washing over it. Trying to make sense of wearable tech now makes my head hurt. The world and its mother are doing smartwatches and fitness trackers. Smartglasses are coming back. Add the wellness belts, selfie translators that can get you a date or get you arrested, and ingestibles into the mix; well it’s all too much to digest. There are signals the market is becoming tired too, as the launch of the Fitbit Blaze may indicate.

On a winning streak: Mi Band (All app images are on iOS)

But after 7 days of wearing the Mi Band, I have to say: I like it.

Mi User Experience Es Tu User Experience

My Mi Band came in a neat little box, complete with Chinese language instructions.

Inside the little box: A big UX emerges.

Setup was straightforward. I figured out that the QR code in the little booklet was my gateway to installing the parent app (iOS and Android are supported) on my iPhone and creating an account. Account verification requires an SMS text code to be sent and entered. This made me wonder where my data was stored and how secure it was. Whatever.

I entered the typical body data to get the Mi Band set up for recording my activity (by way of steps) and sleep automatically, reporting progress in the mobile app or at a glance via the LEDs on the sensor (itself somewhat underwhelming in appearance; this ain't no Swarovski Misfit Shine).

Enter your personal data. Be honest.

Metric, Imperial, and Jin locale units are supported.

I charged up the sensor using yet another unique USB cable to add to my ever-growing pile of Kabelsalat, slipped the sensor into the little bracelet (black only, boo!), and began tracking step, sleep and weight progress (the latter requires the user to enter data manually).

I was impressed by the simplicity of operation, balanced by attention to detail and a friendly style of UX. The range of locale settings, the quality of the visualizations, and the very tone of the communications (telling me I was on a “streak”) were something I did not expect from a Chinese device. But then Xiaomi is one of the world’s biggest wearable tech players, so shame on me, I guess.

The data recorded seemed to be fairly accurate. The step count seemed to be a little high for my kind of exertion and my sleep stats seemed reasonable. The Mi Band is not for the 100 miles-a-week runners like me or serious quantified self types who will stick with Garmin, Suunto, Basis, and good old Microsoft Excel.

For a more in-depth view of my activity stats, I connected the Mi Band to Apple Health and liked what I saw on my iPhone (Google Fit is also supported). And of course, the Mi Band app is now enabled for social. You can share those bragging rights like the rest of them.

But, you guessed it. I hated the color of the wristband. Only black was available, despite Xiaomi illustrations showing other colors. WTF? I retaliated by ordering a Hello Kitty version from a third party.

The Mi Band seems ideal for the casual-to-committed fitness type and budding gym bunnies embarking on New Year resolutions to improve their fitness who need the encouragement to keep going. At a cost of about 15 US dollars, the Mi Band takes some beating. It's most easily compared with the Fitbit Flex, and that costs a lot more.

Beyond Getting Up To Your Own Devices

I continue to enjoy the simple, glanceable UX and reporting of my Mi Band. It seems to me that its low price is hinting at an emergent business model that is tailor-made for the cloud: Make the devices cheap or even free, and use the data in the cloud for whatever personal or enterprise objectives are needed. That leaves the fanatics and fanbois to their more expensive and complex choices and to, well, get up to their own devices.

So, for most, keeping things simple wins out again. But the question remains: how can tech players keep on keeping it simple?

Mi Band Review at a Glance

Likes

  • Simplicity
  • Price
  • Crafted, personal UX
  • Mobile app visualizations and Apple and Google integration

Dislikes

  • Lack of colored bands
  • Personal data security
  • Unique USB charging cable
  • Underwhelming #fashtech experience

Your thoughts are welcome in the comments.

Video: Oracle Linux Virtual Machine (VM) on Microsoft Azure

Tim Hall - Mon, 2016-01-18 02:17

The interface for Microsoft Azure has been re-jigged since I last did screen shots, so I did a run through of creating an Oracle Linux VM and recorded it for my channel.

I also updated the associated article.

Cheers

Tim…

Video: Oracle Linux Virtual Machine (VM) on Microsoft Azure was first posted on January 18, 2016 at 9:17 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Drop Column

Jonathan Lewis - Mon, 2016-01-18 02:14

I published a note on AllthingsOracle a few days ago discussing the options for dropping a column from an existing table. In a little teaser to a future article I pointed out that dropping columns DOESN’T reclaim space; or rather, probably doesn’t, and even if it did you probably wouldn’t like the way it does it.

I will be writing about “massive deletes” for AllthingsOracle in the near future, but I thought I’d expand on the comment about not reclaiming space straight away. The key point is this – when you drop a column you are probably dropping a small fraction of each row. (Obviously there are some extreme variants on the idea – for example, you might have decided to move a large varchar2() to a separate table with a shared primary key.)

If you’ve dropped a small fraction of each row you’ve freed up a small fraction of each block, which probably means the block hasn’t been identified as having available free space for inserts. In many cases this is probably a good thing – because it’s quite likely that if every block in your table is suddenly labelled as having sufficient free space for new rows then you could end up with a difficult and ongoing performance problem.

Many large tables have a “time-based” component to their usage – as time passes the most recently entered rows are the ones that get most usage, and older rows are no longer accessed; this means you get a performance benefit from caching because the most useful fractions of such tables are often well cached and the “interesting” data is fairly well clustered.

In a case like this, imagine what will happen if EVERY block in your table suddenly acquires enough free space to accept a couple of new rows – over the next few days the incoming data will be spread across the entire length of the table, and for the next couple of months, or years, you will have to keep the entire table cached in memory if the performance is to stay constant; moreover the clustering_factor of the most useful indexes is likely to jump from “quite small” to “absolutely massive”, and the optimizer will start changing lots of plans because it will decide that your favourite indexes are probably much too expensive to use.

I am, of course, painting a very grim picture – but it is a possible scenario that should be considered before you drop a column from a table. Combined with my observations about the locking and overheads of dropping a column you might (probably ought to) decide that you should never drop a column: you should only mark it as unused or (better still, if you’re on 12c) mark it invisible for a while before marking it unused. You can worry about space reclamation at a later date, when you have considered all the ramifications of how it might impact performance.
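
As a rough sketch of that alternative (hypothetical table and column names, syntax to be verified on your own version):

-- 12c only: hide the column first, so it can be made visible again if something breaks
alter table t1 modify (col1 invisible);

-- later, when you are sure, mark it unused (a quick, dictionary-only operation)
alter table t1 set unused column col1;

-- and only reclaim the space once you have considered the performance side effects
-- alter table t1 drop unused columns;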

Footnote: If you’re still using freelist management then dropping a column won’t put a block on the freelist until the total used space in the block falls below the value dictated by pctused (default 40%); if you’re using ASSM then the block doesn’t become available for reuse until (by default) the free space exceeds 25% of the block’s usable space.

 

 


APEX Dashboard Competition

Denes Kubicek - Sun, 2016-01-17 15:01
The APEX Dashboard Competition initiated by Tobias Arnhold is now online. If you want to compete against your colleagues, all you need to do is create a nice looking dashboard based on the prepared set of data, create a packaged application and send it to the jury. You can apply here: Submit your application and win some nice prizes. Hurry up. The closing is on Friday the 1st of April 2016.

Categories: Development

DML Operations On Partitioned Tables Can Restart On Invalidation

Randolf Geist - Sun, 2016-01-17 13:12
It's probably not that well known that Oracle can actually roll back / re-start the execution of a DML statement should the cursor become invalidated. By rollback / re-start I mean that Oracle actually performs a statement-level rollback (so any modification already performed by that statement up to that point gets rolled back), performs another optimization phase of the statement on re-start (due to the invalidation) and begins the execution of the statement from scratch. Note that this can happen multiple times - actually it's possible to end up in a kind of infinite loop when this happens, leading to statements that can run for a very, very long time (I've seen statements on Production environments executing for several days although a single execution would only take minutes).

The pre-requisites to meet for this to happen are not that complex or exotic:

- The target table to manipulate needs to be partitioned

- The cursor currently executing gets invalidated - either by running DDL (typically think of partition related operations) - or simply by gathering statistics on one of the objects involved in the statement

- The DML statement hasn't yet touched one of the partitions of the target table but attempts to do so after the cursor got invalidated

When the last condition is met, the statement performs a rollback, and since it got invalidated - which is one of the conditions to be met - another optimization phase happens, meaning that it's also possible to get different execution plans for the different execution attempts. When the execution plan is ready the execution begins from scratch.

According to my tests the issue described here applies to both conventional and direct-path inserts, merge statements (insert / update / delete) as well as serial and parallel execution. I haven't explicitly tested UPDATE and DELETE statements, but the assumption is that they are affected, too.

The behaviour is documented in the following note on MOS: "Insert Statement On Partitioned Tables Is RE-Started After Invalidation (Doc ID 1462003.1)" which links to Bug "14102209 : INSERT STATEMENT' IS RESTARTING BY ITSELF AFTER INVALIDATION" where you can also find some more comments on this behaviour. The issue seems to be that Oracle at that point is no longer sure if the partition information compiled into the cursor for the partitioned target table is still correct or not (and internally raises and catches a corresponding error, like "ORA-14403: Cursor invalidation detected after getting DML partition lock", leading to the re-try), so it needs to refresh that information, hence the re-optimization and re-start of the cursor.

Note that this also means the DML statement might already have performed modifications to other partitions when, after being invalidated, it attempts to modify another partition it hasn't touched yet - all it takes is an attempt to modify a partition not yet touched by that statement.

It's also kind of nasty that the statement keeps running the potentially lengthy query part after being invalidated, only to find out that it needs to re-start when the first row is about to be applied to a target table partition not touched yet.

Note that applications typically run into this problem, when they behave like the following:

- There are longer running DML statements that typically take several seconds / minutes until they attempt to actually perform a modification to a partitioned target table

- They either use DBMS_STATS to gather stats on one of the involved tables, typically using NO_INVALIDATE=>FALSE, which leads to an immediate invalidation of all affected cursors

- And/Or they perform partition related operations on one of the tables involved, like truncating, creating or exchanging partitions. Note that it is important to point out that it doesn't matter which object gets DDL / stats applied, so it's not limited to activity on the partitioned target table being modified - any object involved in the query can cause the cursor invalidation

In principle this is another variation of the general theme "Don't mix concurrent DDL with DML/queries on the same objects". Doing so is something that leads to all kinds of side effects, and the way the Oracle engine is designed means that it doesn't cope very well with doing so.

Here is a simple test case for reproducing the issue, using INSERTs in this case here (either via INSERT or MERGE statement):

create table t_target (
id number(*, 0) not null,
pkey number(*, 0) not null,
filler varchar2(500)
)
--segment creation immediate
partition by range (pkey) --interval (1)
(
partition pkey_0 values less than (1)
, partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
);

create table t_source
compress
as
select 1 as id, rpad('x', 100) as filler
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3),
(select /*+ cardinality(1e0) */ null from dual connect by level <= 1e0)
union all
select 1 as id, rpad('y', 100) as filler from dual;

-- Run this again once the DML statement below got started
exec dbms_stats.gather_table_stats(null, 't_source', no_invalidate=>false)

exec dbms_stats.gather_table_stats(null, 't_target', no_invalidate=>false)

----------------------------------------------------------------------------------------------------------------------------------
-- INSERT example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

-- alter session set tracefile_identifier = 'insert_restart';

-- alter session set events '10046 trace name context forever, level 12';

-- exec sys.dbms_monitor.session_trace_enable(waits => true, binds => true/*, plan_stat => 'all_executions'*/)

insert /* append */ into t_target (id, pkey, filler)
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
;

-- exec sys.dbms_monitor.session_trace_disable

----------------------------------------------------------------------------------------------------------------------------------
-- MERGE example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

merge /* append */ into t_target t
using (
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
) s
on (s.id = t.id)
when not matched then
insert (id, pkey, filler) values (s.id, s.pkey, s.filler)
;
The idea of the test case is to maximise the time until each UNION ALL branch produces data to insert by performing an inefficient HASH JOIN (that in fact generates a Cartesian product and needs to apply a costly REGEXP filter on that huge intermediate result) and forcing a sort on the join result, so rows will only be handed over to the parent operations once all rows were processed in the join operation - and each branch generates data for a different partition of the target table. Typically it should take several seconds per branch to execute (if you need more time just un-comment the additional REGEXP_REPLACE filters), so you should have plenty of time to cause the invalidation from another session.

This means during the execution of each branch invalidating the cursor (for example by executing either of the two DBMS_STATS calls on the source or target table using NO_INVALIDATE=>FALSE) will lead to a re-start of the statement at the next attempt to write into a new target partition, possibly rolling back rows already inserted into other partitions.

Diagnostics
If you run the provided INSERT or MERGE statement on newer versions of Oracle that include the SQL_EXEC_START and SQL_EXEC_ID in V$ACTIVE_SESSION_HISTORY (or V$SESSION for that matter) and invalidate the cursor during execution and before a partition of the target table gets inserted for the first time then you can see that these entries change as the statement restarts.

In such cases the INVALIDATIONS and LOADS increase in V$SQL accordingly and the OBJECT_STATUS changes from INVALID_UNAUTH to VALID again with each re-start attempt. In newer versions where you can configure the "plan_stat" information for SQL trace to "all_executions" you'll find STAT lines for each execution attempt dumped to the trace file, but only a single final EXEC line, where the elapsed time covers all execution attempts.
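
A hedged sketch of the kind of queries that show this (substitute the SQL_ID of your own statement):

select sql_id, child_number, executions, loads, invalidations, object_status
from v$sql
where sql_id = '&sql_id';

select sid, sql_id, sql_exec_id, sql_exec_start
from v$session
where sql_id = '&sql_id';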

The oldest version I've tested was 10.2.0.4, and that one already showed the re-start behaviour, although I would be inclined to think that this wasn't the case with older versions. So if anybody still runs a version older than 10.2.0.4 I would be interested to hear whether the behaviour reproduces or not.

Oracle Access Manager (OAM 11g) Interview Questions.. Do you know enough about OAM ???

Online Apps DBA - Sat, 2016-01-16 19:47
This entry is part 3 of 5 in the series Oracle Access Manager

Oracle Access Manager (OAM) is Oracle’s recommended Single Sign-On (SSO) solution that not only provides heterogeneous platform support but is also integrated with Oracle Fusion Applications. OAM is also the recommended SSO solution for Oracle Fusion Middleware (WebCenter, OBIEE, SOA) and for Oracle E-Business Suite, PeopleSoft or Siebel CRM.

A lot of the time in our Oracle Access Manager Training (next batch starts on 31st January, 2016 – register now and get a discount of 200 USD; apply coupon code A2OFF), trainees ask about interview questions related to the Oracle Identity and Access Management Suite.

These questions also help them understand concepts and important points that are helpful in an actual implementation of OAM.

Can you answer these basic questions related to Oracle Identity & Access Management? (Leave answers in the comments to see how many you get right.)

1) What is the name of the main OAM configuration file and where is it located (DB or file system)?

2) What is the name of the main WebLogic Domain configuration file and where is it located?

3) What is the location of the OAM Admin Server logs?

4) What is the Oracle Instance in OID?

5) Where are the start/stop scripts for the Admin and Managed Servers located?

6) What is the difference between the OPEN, SIMPLE, and CERT communication modes in WebGate to OAM communication?

7) What is the Proxy Port in the OAM Server and which component of OAM connects to the Proxy Port of OAM?

Click on the button below and get the Cheat Sheet on Oracle Access Manager including the answers for above questions.

The cheat sheet also contains the Basic information on Oracle Internet Directory (OID).

If you want to learn more or wish to discuss challenges you are hitting in an Oracle Access Manager implementation or OAM integration with Oracle E-Business Suite (R12.1/12.2), register for our Oracle Access Manager Training (next batch starts on 31st January, 2016 – register now and get a discount of 200 USD; apply coupon code A2OFF – discounts won’t last long and prices will go up soon).

We are so confident in the quality and value of our training that we provide a 100% money-back guarantee: in the unlikely case that you are not happy after 2 sessions, just drop us a mail before the third session and we’ll refund the FULL amount.

We provide a dedicated machine on the cloud to practice OAM implementation, including integration with E-Business Suite, and recordings of the live interactive trainings with lifetime access.

 

Click Here to Subscribe with us to get OAM-OID Cheat Sheet

Stay tuned for more Interview Questions on OAM-EBS 12.2/12.1 Integration in our next post.

The post Oracle Access Manager (OAM 11g) Interview Questions.. Do you know enough about OAM ??? appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Automatic ADF Logout on Browser Close with WebSocket

Andrejus Baranovski - Sat, 2016-01-16 12:15
Every ADF project could have a requirement to handle the browser close event effectively. Unlike desktop applications, where we could handle such events, the browser doesn't send any event to the server when the browser page is closed. This is especially important for transactional data, when a user locks a data row and the lock must be released automatically if the user closes the browser without unlocking it. Besides transactional data management, it is important for performance: the Web session will be closed instantly and WebLogic resources will be released. There was no reliable solution to handle this; now we can do it with WebLogic 12c and WebSockets. WebLogic 12c supports WebSockets natively, so there is no need to configure anything or add libraries.

When the WebSocket channel is closed, it delivers an event to the server on browser close. We can use this event to release locked resources; we need to log out the ADF session to force a ROLLBACK. The sample application implements an HTTP client invoked by the WebSocket server endpoint. This HTTP client simulates browser activity and redirects to adfAuthentication with a logout request, using the original JSESSIONID from the closed browser session. This also works during an accidental client computer power-off. Download the sample application, where I have described all the important implementation steps - ADFSessionHandlingWebSocket.zip.

The sample application is protected with ADF Security; you can log in with the redsam/welcome1 user. It includes Oracle JET libraries; one of the dashboard tiles implements a JET fragment. JET is not required for the sample to work, it is used only to implement the dashboard UI.

You can observe from the browser log when the WebSocket connection is opened. The connection is established on initial page load, immediately after login. This opens the WebSocket communication channel; as soon as this channel is closed, the WebSocket server endpoint will force logout of the ADF session from the closed browser:


As soon as the WebSocket channel is established, the ADF web session ID (the JSESSIONID is retrieved from the cookie) is sent to the WebSocket server endpoint. The sample logs the ID of the ADF web session:


I'm going to lock one of the records by invoking the ADF BC lock method (the DB pool is disabled, so the DB connection stays assigned to the AM):


The lock action for the row data is visible in the log; SELECT FOR UPDATE is executed for ID 102:


Let's close the browser now without issuing a ROLLBACK, by invoking the Quit action in the browser:


The WebSocket channel will be closed and this will trigger an ADF Authentication servlet request to log out from the ADF web session. As the logout happens, ADF resources are released and ADF BC triggers a ROLLBACK in the DB:


The session is closed based on the JSESSIONID. With the HTTP client we simulate user logout after the browser was closed:


Now let's review the technical implementation. The WebSocket channel is opened from JavaScript:


This happens on page load, with a clientListener triggering the connectSocket method:


The connectSocket method in JavaScript uses the standard WebSocket API to open the connection:


The WebSocket server endpoint is defined with a custom configurator. Through the configurator we can reference the HTTP session initiated in ADF. The HTTP client in the WebSocket endpoint will use it later to simulate the ADF logout:


The ADF HTTP session is referenced through the WebSocket configurator:


A helper session listener class is defined in web.xml; here I'm copying the JSESSIONID from the cookie into a session attribute (to be able to reference the JSESSIONID in the WebSocket endpoint):


The onClose method in the WebSocket endpoint is invoked when the connection channel between client and server is closed (the browser is closed). Here I'm invoking the custom handleLogout method:


I'm constructing an HTTP client request with the same JSESSIONID value that was copied from the ADF session. An HTTP GET is executed against the ADF Authentication servlet with a logout request using the JSESSIONID value. This forces the ADF session to log out from the HTTP session simulated by the HTTP client:


Validate DG Broker Config for Switchover

Michael Dinh - Sat, 2016-01-16 12:08

Primary and Standby databases are running on the same server using OMF with listening on port 1530/1531

Note I have – TraceLevel = ‘SUPPORT’

+++ Check listener for DGMGRL service from PRIMARY and STANDBY.
oracle@arrow:hawksan:/media/sf_working/dataguard
$ lsnrctl status listener_las|grep DG -A 1
Service "hawklas_DGB" has 1 instance(s).
  Instance "hawklas", status READY, has 1 handler(s) for this service...
Service "hawklas_DGMGRL" has 1 instance(s).
  Instance "hawklas", status UNKNOWN, has 1 handler(s) for this service...

oracle@arrow:hawksan:/media/sf_working/dataguard
$ lsnrctl status listener_san|grep DG -A 1
Service "hawksan_DGB" has 1 instance(s).
  Instance "hawksan", status READY, has 1 handler(s) for this service...
Service "hawksan_DGMGRL" has 1 instance(s).
  Instance "hawksan", status UNKNOWN, has 1 handler(s) for this service...

Get into the habit of using instance versus database where applicable, for RAC compatibility.

DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawklas DGConnectIdentifier
  DGConnectIdentifier = 'hawklas'
DGMGRL> show instance hawklas DGConnectIdentifier
  DGConnectIdentifier = 'hawklas'
DGMGRL>
+++ Check DG Configuration
oracle@arrow:hawklas:/media/sf_working/dataguard
$ ./check_dg.sh
***** Checking Data Guard Broker Configuration ....
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> show configuration verbose

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show configuration TraceLevel
  TraceLevel = 'SUPPORT'
DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 0 seconds ago)
  Apply Lag:       0 seconds (computed 0 seconds ago)
  Apply Rate:      45.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> show instance hawklas DGConnectIdentifier
  DGConnectIdentifier = 'hawklas'
DGMGRL> show instance hawksan DGConnectIdentifier
  DGConnectIdentifier = 'hawksan'
DGMGRL> show instance hawklas StaticConnectIdentifier
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1530))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))'
DGMGRL> show instance hawksan StaticConnectIdentifier
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))'
DGMGRL> show instance hawklas InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL> show instance hawksan InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL> show instance hawklas LogArchiveMaxProcesses
  LogArchiveMaxProcesses = '4'
DGMGRL> show instance hawksan LogArchiveMaxProcesses
  LogArchiveMaxProcesses = '4'
DGMGRL> show instance hawklas DelayMins
  DelayMins = '0'
DGMGRL> show instance hawksan DelayMins
  DelayMins = '0'
DGMGRL> show instance hawklas LogArchiveTrace
  LogArchiveTrace = '0'
DGMGRL> show instance hawksan LogArchiveTrace
  LogArchiveTrace = '0'
DGMGRL> show instance hawklas statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show instance hawksan statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> exit
oracle@arrow:hawklas:/media/sf_working/dataguard
$
+++ Test connectivity to database using StaticConnectIdentifier from DG Broker
oracle@arrow:hawksan:/media/sf_working/dataguard
$ sqlplus /nolog

SQL*Plus: Release 11.2.0.4.0 Production on Sat Jan 16 08:10:31 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

@> connect sys/oracle@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1530))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))' as sysdba
Connected.
ARROW:(SYS@hawklas):PRIMARY> show parameter name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cell_offloadgroup_name               string
db_file_name_convert                 string
db_name                              string      hawk
db_unique_name                       string      hawklas
global_names                         boolean     FALSE
instance_name                        string      hawklas
lock_name_space                      string
log_file_name_convert                string
processor_group_name                 string
service_names                        string      hawk,hawklas
ARROW:(SYS@hawklas):PRIMARY> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ WRITE           PRIMARY

ARROW:(SYS@hawklas):PRIMARY> connect sys/oracle@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))' as sysdba
Connected.
ARROW:(SYS@hawksan):PHYSICAL STANDBY> show parameter name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cell_offloadgroup_name               string
db_file_name_convert                 string
db_name                              string      hawk
db_unique_name                       string      hawksan
global_names                         boolean     FALSE
instance_name                        string      hawksan
lock_name_space                      string
log_file_name_convert                string
processor_group_name                 string
service_names                        string      hawk,hawksan
ARROW:(SYS@hawksan):PHYSICAL STANDBY> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ ONLY WITH APPLY PHYSICAL STANDBY

ARROW:(SYS@hawksan):PHYSICAL STANDBY>
+++ switchover to hawksan (STANDBY)

You must connect as sys@tns (typically the same as the DGConnectIdentifier) for a switchover.

oracle@arrow:hawklas:/media/sf_working/dataguard
$ ./clearlog.sh
oracle@arrow:hawklas:/media/sf_working/dataguard
$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@hawklas
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      19.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> switchover to hawksan
Performing switchover NOW, please wait...
Operation requires a connection to instance "hawksan" on database "hawksan"
Connecting to instance "hawksan"...
Connected.
New primary database "hawksan" is opening...
Operation requires startup of instance "hawklas" on database "hawklas"
Starting instance "hawklas"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "hawksan"
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawksan - Primary database
    hawklas - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> show database hawklas

Database - hawklas

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      0 Byte/s
  Real Time Query: ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> exit
+++ Save logs for reference
oracle@arrow:hawklas:/media/sf_working/dataguard
$ ./savelog.sh
`/u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_las.log' -> `/tmp/listener_las.log'
`/u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_san.log' -> `/tmp/listener_san.log'
`/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace/alert_hawklas.log' -> `/tmp/alert_hawklas.log'
`/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace/drchawklas.log' -> `/tmp/drchawklas.log'
`/u01/app/oracle/diag/rdbms/hawksan/hawksan/trace/alert_hawksan.log' -> `/tmp/alert_hawksan.log'
`/u01/app/oracle/diag/rdbms/hawksan/hawksan/trace/drchawksan.log' -> `/tmp/drchawksan.log'
+++ switchover to hawklas (STANDBY)

You must connect as sys@tns (typically the same as the DGConnectIdentifier) for a switchover.

oracle@arrow:hawklas:/media/sf_working/dataguard
$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@hawklas
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawksan - Primary database
    hawklas - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to hawklas
Performing switchover NOW, please wait...
New primary database "hawklas" is opening...
Operation requires startup of instance "hawksan" on database "hawksan"
Starting instance "hawksan"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "hawklas"
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      0 Byte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> exit
oracle@arrow:hawklas:/media/sf_working/dataguard
$

Defining Resources in #GoldenGate Studio 12c

DBASolved - Fri, 2016-01-15 16:12

As I’ve been working with the beta of GoldenGate Studio 12c, I have tried to do simple things to see what will break and what is needed to make the process work. One of the things that I like about the studio is that, prior to creating any solutions, mappings or projects, you can define which databases and GoldenGate instances will be used during the design process. What I want to show you in this blog post is how to create the database resource and the GoldenGate instance resource.

Creating a Resource:

To create a database resource, after opening GoldenGate Studio, go to the Resources tab. On this tab, you will see that it is empty; this is because no resources have been created yet.

In the left-hand corner of the Resources tab, you should see a folder with a small arrow next to it. When you click on the arrow, you are presented with a context menu that provides three resource options (Databases, Global Mappings, and GoldenGate Instances).


Database Resources:

Now that you know how to select which resource you want to create, let's create a database resource. To do this, select the database resource from the context menu. This will open up a one-page wizard/dialog for you to fill out the connection information for the database you want to use as a resource.

You will notice there are a few fields that need to be populated. Provide the relevant information needed to connect to the database. Once all the information has been provided, you can test the connection to validate that it works before clicking OK.

Once you click OK, the database resource will be added to the Resources tab under the Databases header.

Notice that the database is connected automatically once the resource is created. This allows you to immediately start using the resource for mappings and global mappings.

GoldenGate Instance Resources:

The GoldenGate Instance resources are a little more complex to configure. This is because the GoldenGate environment has to have the GoldenGate Monitoring Agent (aka JAgent, 12.1.3.0) running. This is the same JAgent that is used with the OEM plug-in. If you need more information on how to install and configure the JAgent, you can find it here.

Now, to create a new GoldenGate Instance resource, you follow the same approach as you would to create a database resource; instead of selecting Database, select GoldenGate Instance. This will open up the GoldenGate Instance wizard/dialog for you to fill out. Provide all the information requested.

In setting up the GoldenGate Instance, there are a few things that you need to provide. In my opinion, the names of the items requested in the GoldenGate Information section are misleading. To make this a bit easier, I’m providing an explanation of what each field means.

GoldenGate Version: This is the version of GoldenGate running with the JAgent
GoldenGate Database Type: The database which GoldenGate is running against. There are multiple options here
GoldenGate Port: This is the port number of the manager process
Agent Username: This is the username that is defined in $GGAGENT_HOME/cfg/Config.properties
Agent Password: This is the password that is created and stored in the datastore for the JAgent
Agent Port: This is the JMX port number that is defined in $GGAGENT_HOME/cfg/Config.properties

After providing all the required information, you can then perform a test connection. If the connection is successful, then you can click “ok” to create the GoldenGate Instance resource. If the connection fails, then you need to confirm all your settings.

Once all the resources you need for designing your GoldenGate architecture have been created, you will see them under the Resources tab.

Now that you know how to create resources in GoldenGate Studio, this will help you in designing your replication flows.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

How to Migrate from On-Premises to Azure SQL Database

Pythian Group - Fri, 2016-01-15 15:14

The Azure SQL Database is improving its capabilities day by day. The "cloud-first" strategy used by Microsoft is also an incentive to start using Azure's SQL Database as a Service (DBaaS) offering.

In this article I'll explain all the steps to move your database from on-premises to Azure, using three different approaches grouped into two general methods. You will need to choose the right one based on your migration strategy and on the database that you are migrating. Don't forget that not all the features supported on-premises are supported on Azure, so some additional work may be needed prior to the migration.

I’ll show how to migrate a database to Azure SQL Database by using two general methods:

  • Using the SQL Server Management Studio – Recommended when there are no code compatibility issues blocking the cloud migration.
  • Using the SQL Server Data Tools – This approach is highly recommended when there are migration barriers, as the process of detecting and fixing the issues is simpler and more direct.

If you are in doubt about which one to use, the recommendation is to start by using the SQL Server Management Studio approach and, in case of failures, proceed with the SQL Server Data Tools.

Migrate Using SQL Server Management Studio

SQL Server Management Studio (SSMS) offers two direct ways to transfer a database to an Azure SQL Database. To proceed, connect to the SQL Server instance and run either the "Deploy Database to Microsoft Azure SQL Database" wizard or the "Export Data-tier Application" option from SQL Server Management Studio.

img1

If you cannot find the preferred option, you will need to update your SQL Server Management Studio (SSMS), which is now a free standalone product. You can do this by downloading the latest version.

The primary difference between the two options is that the "Deploy" option requires an existing database server in Azure and deploys the on-premises database directly to that location, while the "Export" option creates a file that is afterwards imported from the Azure portal. The exported file can be loaded straight into an Azure Blob Storage account, which helps avoid an extra step to copy the file (recommended).

NOTE: For both options, an Azure Blob Storage account with a container and an Azure SQL server are needed.

Migration Steps Using the Deployment Wizard
  1. Right-click the database and select the Deploy Database to Microsoft Azure SQL Database option.
img2
  2. Fill in the required fields.
    The server information is for the target (the Azure SQL Database server). The settings that define the price tier are also configured at this stage. The BACPAC file will be created locally and then applied on the Azure SQL server, so we will need a temporary place on the server to store it.
  3. Click Next.

img3

  4. Review the settings and click Finish.
img4
  5. Wait for the process to complete.
    At this stage the wizard will validate the database, create the BACPAC file, and apply it on the Azure SQL server to create the database.

img5

  6. The database is now ready; use the server admin account to access the Azure SQL server.

 

Migration Steps using the Export Data-Tier Application Process
  1. Right-click the database and select the Export Data-tier Application option.
011316_1528_HOWDOYOUMIG6.png
  2. Save the file in an Azure Blob Storage Account. You will need the account name and access key.
  3. Select the container and click Next.
011316_1528_HOWDOYOUMIG7.png
  4. Click Finish, and wait for the processing to complete.
  5. Once the process completes, a "Success" message is shown, as in the screen below. Otherwise, there are items that need to be resolved before the database can be converted into an Azure SQL Database.
011316_1528_HOWDOYOUMIG8.png
  6. Connect to the Azure portal and choose the SQL Servers.
  7. Select the SQL Server location where the database should be created, and then click the Import Database icon as shown below.
011316_1528_HOWDOYOUMIG9.png
  8. Complete the required settings, including the BACPAC file location, price tier, and server administrator's password, and then click Create.
011316_1528_HOWDOYOUMIG10.png
  9. Once the process completes, the database will be seen in the list.

011316_1528_HOWDOYOUMIG11.png

Migrate Using SQL Server Data Tools

When using SSMS to migrate the database with a BACPAC, we don't have the flexibility needed to properly detect and fix the issues that are found. For this purpose, the SQL Server Data Tools – Business Intelligence is a better option to analyze the database objects. To proceed with this option, follow the steps below.

 

Creating the Main Project
  1. Using the SQL Server Data Tools BI, click the SQL Server Object Explorer tab and connect to the on-premises instance:

011316_1528_HOWDOYOUMIG12.png

  2. Right-click the database to be migrated to Azure, and then click Create New Project.
  3. Add a name to the project and select a path to save the project files.
  4. Click Next and wait for the processing to complete.

011316_1528_HOWDOYOUMIG13.png

  5. After the project is created, right-click the project root, go to Properties and change the Target Platform to Azure SQL Database. Save and close.

011316_1528_HOWDOYOUMIG14.png

  6. Right-click the project and click Rebuild. If problems are detected, all the errors will be shown in the Error List.

011316_1528_HOWDOYOUMIG15.png

  7. Go to File->New->Project, give the project a name (I will name it AWAzure) and, in the Solution option, click Add to solution:

011316_1528_HOWDOYOUMIG16.png

 

 

Creating the New Schema

In order to filter out the non-supported features and find the code to be corrected, the next step is to create a Schema Comparison. Follow the steps shown:

011316_1528_HOWDOYOUMIG17.png

  1. Now, select the options. Click the icon shown.

011316_1528_HOWDOYOUMIG18.png

  2. In the Schema Compare Options window, click to clear the following known non-supported items:
  • Aggregates
  • Application Roles
  • Assemblies
  • Asymmetric Keys
  • Broker Providers
  • Certificates
  • Contracts
  • Defaults
  • Extended Properties
  • Filegroups
  • FileTables
  • Full-Text Stoplists
  • Full-Text Catalogs
  • Full-Text Indexes
  • Message Types
  • Partition Functions
  • Partition Schemes
  • Queues
  • Remote Service Bindings
  • Rules
  • Sequences
  • Services
  • Symmetric Keys
  • User-Defined Types (CLR)
  • XML Indexes
  • XML Schemas Collections
  3. Click OK and save the Schema Comparison, as it can be useful later.
  4. Select the source: the on-premises database.

011316_1528_HOWDOYOUMIG19.png

  5. Select the target: the empty SQL Server project created earlier (AWAzure).

011316_1528_HOWDOYOUMIG20.png

We will have the following:

011316_1528_HOWDOYOUMIG21.png

  6. Now, click Compare. Wait for the process to complete and then click Update (click Yes in the confirmation pop-up) to update the selected target.
  7. Next, go to the AWAzure (the target) project, right-click on the root, go to Properties, and change the Target Platform to Azure SQL Database.
  8. Click Save and close the screen.

 

Resolving Problems

Now it's time to resolve the problems. Check the Error List tab and double-click each item found to open the code. Resolve the issue and save the file.

011316_1528_HOWDOYOUMIG22.png

Use the filter to ensure you are dealing with the right project.

011316_1528_HOWDOYOUMIG23.png

 

 

Deploying the Schema

After the schema revision, we can publish the database.

  1. To publish the database, right-click the AWAzure project, and click Publish.
  2. Edit the target instance and connect to the Azure SQL server:

011316_1528_HOWDOYOUMIG24.png

  3. Fill in the database name and click Publish.

img6

Moving the Data

The schema is deployed. Now it is time to move the data. To do this, use the Import and Export Wizard from SQL Server Management Studio.

  1. Connect to the on-premises instance, right-click the database used as the data source and follow the steps shown:

img7

  2. In the wizard, confirm the server name and the source database, and then click Next.

img8

Now, do the same for the Azure SQL Database.

  3. In the Destination field, select SQL Server Native Client 11.0, fill in the server name, and select the target database.
img9
  4. Click Next.
  5. For this step, keep the first option selected, and then click Next.
img10

Select all the tables and views from the source. Notice that SQL Server will automatically map the target tables on Azure.

About data hierarchy: if foreign key constraints are being used in the database, the data migration should be made in phases to avoid failures. This needs to be analyzed prior to the final migration; a query sketch for checking the dependencies follows the screenshot below.

img11
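To get a quick view of that hierarchy before loading the data, a small sketch along these lines can list the parent/child table pairs from the SQL Server catalog views (the connection string and credentials are placeholders, and it assumes the Microsoft JDBC driver is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FkDependencyCheck {
    // Placeholder connection settings; adjust to your environment.
    private static final String URL =
        "jdbc:sqlserver://localhost;databaseName=SourceDb;encrypt=false";

    public static void main(String[] args) throws SQLException {
        // Lists referenced (parent) and referencing (child) tables so the
        // data can be loaded parents-first during a phased migration.
        String sql =
            "SELECT OBJECT_NAME(fk.referenced_object_id) AS parent_table, " +
            "       OBJECT_NAME(fk.parent_object_id)     AS child_table " +
            "FROM sys.foreign_keys fk " +
            "ORDER BY parent_table, child_table";
        try (Connection con = DriverManager.getConnection(URL, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString("parent_table")
                        + " <- " + rs.getString("child_table"));
            }
        }
    }
}

Tables that never appear in the child_table column can be loaded first; the remaining tables follow once their parents are in place.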

  6. Make sure that all the tables are highlighted and click Edit Mappings.
  7. Select Enable Identity Insert and then click OK.
  8. Then, in the main wizard window, click Next.

img12

  9. Make sure the Run immediately check box is selected and click Next.

img13

  10. In the following screen, review the options, and then click Finish.

img14

  11. Monitor the data transfer and close the wizard.

img15

 

That's it. I hope that the steps were clear and this article was useful. If you have questions, do not hesitate to post a comment or contact me on Twitter (@murilocmiranda). "See" you in another article.

 

Discover more about our expertise in SQL Server

Categories: DBA Blogs