Feed aggregator

Oracle Support and Services at Oracle OpenWorld 2014

Chris Warticki - Fri, 2014-08-29 12:21
Tips and Best Practices from the Front Lines

Oracle Support and Services works side by side with millions of Oracle users around the world—we see what works and what doesn't, wins and losses. Through this experience we've captured tips and best practices to save you time, help you innovate, and get the most from your Oracle investment.

While you're at Oracle OpenWorld, we invite you to take advantage of this exclusive annual opportunity to interact with Oracle Support and Services experts. With 56 sessions, hands-on demos, special events, and more, you'll find relevant, useful information based on proven patterns of customer success.

Conference Highlights

For details on all sessions, please refer to the Oracle OpenWorld Content Catalog or the Focus On Oracle Services and Support document.

Support FocusOn Documents for Customers

End-To-End ADF Cloud Deployment Process

Andrejus Baranovski - Fri, 2014-08-29 11:37
ADF and ADF BC run perfectly on Oracle Java Cloud. You can deploy a regular ADF application straight away from the familiar JDeveloper environment without any hassle. With this blog post I would like to walk through the process of migrating the DB model to the cloud and deploying an ADF application (enabled with ADF Security) to the cloud.

Here you can download the sample application - TreeComponentsCloud.zip. This application is deployed and runs on Oracle Java Cloud, accessible through this link. Online access will be available until my Oracle Java Cloud trial subscription expires (in a month or so). You can log in using the following credentials - username: redsam, password: We1come@ and identity domain: ltredsamuraictrial99050.

First of all, we should prepare the data model - basically, you can migrate your local database to the cloud (including data) using JDeveloper wizards. The Database Cart wizard can be used for this purpose: simply add all required tables to the cart and set a checkbox to include the data:


You will need to enable SFTP access and note down the specific SFTP connection details for Oracle Database Cloud; read more about this in the Building the Data Model section of the Oracle Java Cloud documentation. I have defined an Oracle Database Cloud connection in JDeveloper for SFTP access:


Uploading the data model and data to the cloud is a very seamless process - it does everything with just one click. The entire structure is packaged into an archive and sent over to the cloud:


When the migration process is complete, we can double-check that the data is in the cloud. You can expand the Oracle Database Cloud connection in JDeveloper and browse through the tables; the data should be accessible:


Next we should enable secure access in the cloud. Oracle Java Cloud supports a regular ADF Security setup. However, to render the Oracle Java Cloud login page, you must include an additional security constraint in web.xml (read more about ADF Security in the Configuring Security section of the Oracle Java Cloud documentation). Here you can see the security constraint implemented in the sample application's web.xml:
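For reference, a standard servlet security constraint of this kind looks roughly like the sketch below - the resource name, URL pattern and role name here are illustrative and may differ from the actual sample application:

 <security-constraint>
   <web-resource-collection>
     <web-resource-name>adfAuthentication</web-resource-name>
     <url-pattern>/adfAuthentication</url-pattern>
   </web-resource-collection>
   <auth-constraint>
     <role-name>valid-users</role-name>
   </auth-constraint>
 </security-constraint>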


You should define a regular ADF Security permission for page access. I'm using a custom application role - AccountantAppRole:


There is an enterprise role, AccountantRole, defined and mapped to the application role from above. This enterprise role is also defined in the Oracle Java Cloud service:


Finally, there is a user defined - redsam; the same user is defined in the Oracle Java Cloud service. This user is mapped to the AccountantRole enterprise role:


I have defined the AccountantRole role under the Users group in the Oracle Java Cloud service:


This role is mapped to the redsam user in the same Oracle Java Cloud service:


The deployment process is identical to deploying to a local WebLogic server; you can use the same JDeveloper wizard - only select Oracle Cloud as the target Application Server from the list:


Once the application is deployed, you can log in to the Oracle Java Cloud service control (it looks quite similar to Oracle Enterprise Manager) and check the application status, etc.:


Let's do a test now. I will try to log in with a user who does not have access to the application. Our sample application is protected by ADF Security, and Oracle Java Cloud renders the login screen automatically (no need to implement it in your custom application):


Application access will be reported as unauthorised, as expected:


Log in with a valid user - redsam (see all login credentials listed at the beginning of this post):


We can access the application now - browse through the tree structure and even render a colourful chart.

New Revenue Opportunity for Video Producers and Videographers

Bradley Brown - Fri, 2014-08-29 09:38
I absolutely love it when we're able to generate income for people who share their knowledge through our platform.  Take my technical training videos (on Oracle Application Express) for example.  It's so cool that I can produce a set of videos, upload it into our platform and sell the material to people around the world - and I get to maintain my brand (I'm not lost in a marketplace) on my own website.  Anyone can be making money in no time.  We see it happening EVERY day!

At the same time, most videos (at least professional videos) are produced by a videographer.  My good friend Will and I used a videographer to create our Sled Like a Pro series, which teaches people how to snowmobile.  Traditionally, videographers charge a fee for creating, producing, and editing a video.  They might charge $50/hr or $200/hr (or more).  But once the video is finished, they typically turn their work over to someone who creates a DVD or simply uses it as they wish.  Photographers, on the other hand, often retain the rights to the photo and how you use it.

As a producer of video, what if you could negotiate a royalty on all revenue generated from the product you produced?  What if you could do this without impacting the price of the product?  With InteliVideo we allow you to do this.  If you're a reseller of the InteliVideo platform, you'll receive 10% of the net revenue generated - whether it's platform fees or video/product sales.

Be a reseller for InteliVideo!  Sign up for a free account and let us know you're a videographer and that you have an interest in being a reseller.  We'll get you all set up.  We'll provide you with a URL that you can distribute in your emails, which will make sure you get credit for everyone who signs up.

Here's the best part.  Your customers will be able to provide their customers with exactly what they are looking for!  Our platform allows the end customers to watch videos anytime, anywhere.  They can download their video and watch it on a plane, train or automobile.  We protect your customer's content too, so only the app on the device can access the video.

Sign your customers up today!

Log Buffer #386, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-29 08:16

If you ever wanted an easy-peasy way to get a few of the best blog posts of the week from Oracle, SQL Server and MySQL, then Log Buffer Editions are the place to be.

Oracle:

The Product Management team have released a knowledge article for Enterprise Performance Management (EPM) 11.1.2.2.x and 11.1.2.3.x containing details for EPM support with Internet Explorer (IE) 11.

As if anyone needs to be reminded, there’s a ridiculous amount of hype surrounding clouds and big data. There’s always oodles of hype around any new technology that is not well understood.

By mapping an external table to some text file, you can view the file contents as if it were data in a database table.

Vikram has discovered the adopreports utility in R12.2.

As a lot of the new APEX 5 features are “by developers for developers”, this one is also a nifty little thing that makes our lives easier.

SQL Server:

Data Mining: Part 15 Processing Data Mining components with SSIS.

SQL Server AlwaysOn Availability Groups Fail the Initial Failover Test.

Stairway to PowerPivot and DAX – Level 6: The DAX SUM() and SUMX() Functions.

Questions about T-SQL Expressions You Were Too Shy to Ask

SQL Server Service Engine fails to start after applying CU4 for SQL Server 2008 SP1.

MySQL:

MySQL for Visual Studio 1.2.x recently became a GA version. One of the main features included in this version was the new MySQL ASP.NET MVC Wizard.

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More.

Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL

InnoDB provides a custom mutex and rw-lock implementation.

You probably already know that Sphinx supports the MySQL binary network protocol. But, if you haven’t heard – Sphinx can be accessed with the regular ol’ MySQL API.

Categories: DBA Blogs

Presenting at OOW 2014

DBASolved - Fri, 2014-08-29 07:26

This year I’ll be presenting at Oracle Open World with many of the best in the industry. If you are going to be in the San Francisco area between September 28 and October 2, 2014, stop by and check out the conference. Registration information can be found here.

The topics which I’ll be presenting or assisting with this year are:

  • OTN RAC Attack – Sunday, September 28, 2014 – 9 am – 3 pm PST @ OTN Lounge
  • How many ways can I monitor Oracle GoldenGate – Sunday, September 28, 2014 – 3:30 pm – 4:15 pm PST @ Moscone South 309
  • Oracle Exadata’s Exachk and Oracle Enterprise Manager 12c: Keeping Up with Oracle Exadata – Thursday, October 2, 2014 10:45 am – 11:30 am PST @ Moscone South 310

Hope to see you there!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Remote Support for Windows/UNIX/LINUX: Additional Services Series Pt. 5 [VIDEO]

Chris Foot - Fri, 2014-08-29 06:17

Transcript

When outsourcing your operating system support, you want to know that you have expert professionals with knowledge of all your platforms handling your data. At RDX, that’s something you don’t have to worry about.

Welcome back to our Additional Services series!

Whether you use Windows, UNIX or LINUX systems, we remotely support anything and everything an admin does onsite. Our Windows OS tech support includes hardware selection, monitoring and tuning, among many other services. We assume total ownership of everything: your server’s security, performance, availability and improvement, and we understand the mutually dependent OS/DB relationship that affects all these things. The same goes for UNIX and LINUX.

Financially, you pay a single bill for both database and OS support services, and you only pay for the services you need, when you need them.

For more details on our extensive operating system support services, follow the link below. We’ll see you next time!

The post Remote Support for Windows/UNIX/LINUX: Additional Services Series Pt. 5 [VIDEO] appeared first on Remote DBA Experts.

PostgreSQL vs. MySQL: Part Two

Chris Foot - Fri, 2014-08-29 01:34

Part One outlined the histories and basic foundations of PostgreSQL and MySQL, respectively.

In Part Two, we'll focus on the benefits of using both of these structures and how remote DBA professionals use them to perform mission-critical functions for enterprises.

What is a relational database management system?
Before going into further detail on PostgreSQL and MySQL, it's important to define what an RDBMS is, as both of these systems subscribe to this model. According to DigitalOcean, an RDBMS stores information by identifying related pieces of data to form comprehensive sets, or schemas. The tables are easily queried by data analysts, applications and other entities because they are made of columns that define the attributes, with the data held in rows.
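For illustration, here is a minimal sketch of that model in SQL - a table whose columns define the attributes and whose rows hold the data (the table and column names are hypothetical):

-- columns define the attributes of the set
CREATE TABLE employees (
    emp_id  INT PRIMARY KEY,
    name    VARCHAR(100),
    dept_id INT
);

-- each row holds one related record, easily queried by column
SELECT name FROM employees WHERE dept_id = 10;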

MySQL: Support, advantages and drawbacks

As Carla Schroder of OpenLogic noted, MySQL is a solid choice for IT professionals working with Web architectures. It's capable of organizing unstructured information, such as the kind of data found on Twitter, Facebook and Wikipedia (all of which are powered by MySQL). DigitalOcean asserted the platform possesses sound security functions for data access and makes common tasks easy to perform.

As for the disadvantages, the latter source acknowledged that MySQL handles read tasks really well but falls somewhat short when it comes to read-write workloads. In addition, the platform lacks a full-text search component.

PostgreSQL: Support, advantages and drawbacks
DigitalOcean maintained PostgreSQL can handle a large variety of responsibilities quite efficiently due to its high programmability and ACID compliance. Users can implement custom procedures, some of which can be developed to simplify intricate, common database administration operations. Because it is object-relational, it can support nesting and other powerful features. Complex, customized tasks can be easily implemented and deployed.
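As a small, hypothetical illustration of that programmability, a custom procedure in PostgreSQL's PL/pgSQL might wrap a routine administration check like this (the function name and logic are invented for the example):

-- returns the row count of any table passed in by name
CREATE OR REPLACE FUNCTION table_row_count(tbl regclass)
RETURNS bigint AS $$
DECLARE
    cnt bigint;
BEGIN
    EXECUTE format('SELECT count(*) FROM %s', tbl) INTO cnt;
    RETURN cnt;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT table_row_count('employees');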

What are its shortcomings? For one thing, it can be difficult to find hosting services for PostgreSQL because of the sheer number of variations. Also, for simple read-heavy operations it can be "overkill," as DigitalOcean described it.

The post PostgreSQL vs. MySQL: Part Two appeared first on Remote DBA Experts.

Oracle Priority Service Infogram for 28-AUG-2014

Oracle Infogram - Thu, 2014-08-28 16:10

OpenWorld
Each week leading up to OpenWorld we will be publishing various announcements, schedules, tips, etc. related to the event.
Oracle WebCenter & Oracle BPM @ OpenWorld 2014: Don’t-Miss Sessions, Demos, Hands-on Labs, and More, from Oracle Fusion Middleware.
And a few presentations, at OpenWorld and other conclaves: Upcoming Big Data and Hadoop for Oracle BI, DW and DI Developers Presentations

RDBMS
From The ORACLE-BASE Blog: A few more 12c articles. From the same source: In-Memory Column Store and More
Performance
From A Wider View: What Is Oracle DB Time, DB CPU, Wall Time and Non-Idle Wait Time.
Book Review from Sonra: Book Review: Predictive Analytics Using Oracle Data Miner.
From the dbi Services Blog: Oracle 12.1.0.2: Wait event histograms in μs
SQL Developer
From that JEFF SMITH: Oracle SQL Developer: Code on Demand.
Coding
All You Ever Need to Know About Recursive SQL, from Java Code Geeks.
Data Security
From the Capgemini Oracle Blog: Securing data with Oracle Data Redaction
Hyperion
Policy for Supporting EPM System 11.1.2.2.500 and 11.1.2.3.500 with Internet Explorer 11 (Doc ID 1920566.1)
Ops Center
From the Oracle Ops Center blog: Oracle Solaris 11.2 Support
EBS
From the Oracle E-Business Suite Support Blog:
RMA Calculates Tax as a Positive Value
How healthy is your Order Management data?
Are You Having Problems Finding the Procurement Community?
Understanding External Bank Account Masking
Do you find it a challenge diagnosing issues with creating Requisitions to Purchase Orders Automatically?
Channel Revenue Management and General Ledger Integration
From Oracle E-Business Suite Technology
New User Interface Features in Release 12.2.4
Mobile Apps for Oracle E-Business Suite
Oracle E-Business Suite Migration and upgrade from 11i to R12, from Riches Corner | Tech and Money Talk.
Business
5 Ways To Spot A Bad Boss In An Interview, from Forbes.

Switch Recommends a Coding Career for You and Matches You to Courses, from lifehacker.

Going to Oracle Open World? PeopleSoft Your Primary Interest?

PeopleSoft Technology Blog - Thu, 2014-08-28 15:33
We look forward to Oracle Open World every year for a number of reasons.  Chief among them is the opportunity to interact with customers and partners in person.  We also relish the opportunity to show you the latest PeopleSoft applications and tools--the stuff we've been working on over the past year.  If you are attending the conference and building your schedule, there is a handy document on-line that provides information on most or all of the PeopleSoft-focused activities at the conference, including sessions/presentations, meet the experts, demos, exhibition schedules, SIG meetings, user group gatherings and receptions, and more.  It's going to be a great week!  Hope to see you there.

Cal State Online: Public records shed light on what happened

Michael Feldstein - Thu, 2014-08-28 14:35

Last month I shared the system announcement that the Cal State Online (CSO) initiative is finished. Despite the phrasing of “re-visioning” and the retention of the name, the concept of a standalone unit to deliver and market online programs for the system is gone. Based on documents obtained by e-Literate through a public records request:[1]

  • The original concept of “a standardized, centralized, comprehensive business, marketing and outreach support structure for all aspects of online program delivery for the Cal State University System” was defined in summer 2011, formally launched in Spring 2013, and ultimately abandoned in Fall 2013;
  • CSO was only able to enroll 130 full-time equivalent students (FTES) in CY2013 despite starting from pre-existing campus-based online programs and despite minimum thresholds of 1,670 FTES in the Pearson contract;
  • CSO was able to sign up only five undergraduate degree-completion programs and two master’s programs offered at four of the 23 Cal State campuses;
  • Faculty groups overtly supported investments in online education but did not feel included in the key decision processes;
  • Pearson’s contract as a full-service Online Service Provider was in place for less than one year before contract renegotiations began, ultimately leading to LMS services only; and
  • The ultimate trigger to abandon the original model was the $10 million state funding for online education to address bottleneck courses.

That last one might seem counter-intuitive without the understanding that CSO did not even attempt to support matriculated Cal State students in state-funded programs.

Terminology note: CSO measured course enrollments as “one student registered in one online course”, such that one student taking two courses would equal two course enrollments, etc. Internally CSO calculated 10 course enrollments = 1 FTES; at that ratio, for example, 1,200 course enrollments works out to 120 FTES.

Below is a narrative of the key milestones and decisions as described by the public documents. I’ll share more of my thoughts in a future post.

2011

Based on foundational work done in 2010 by the Technology Steering Committee (TSC), a group of nine campus presidents along with six Chancellor’s Office staff, a contract was awarded to a consultant (Richard Katz and Associates) to produce five reports on online learning (link will download zip file) and on Cal State University’s work to date. The TSC then produced an overview document for what would become CSO in June 2011, including 10 guiding principles and the first schedule estimate. An October 2011 update document further clarified the plans. Some key decisions made in 2011 included forming a separate 501(c)3 organization owned by Cal State University and funding the creation of CSO through the contribution of $50,000 from each of the 23 CSU campuses.

Two key decisions from this period are worth highlighting, as they explain much of the trajectory of CSO in retrospect. The first one defined the need for an Online Service Provider (ultimately chosen as Pearson).

A business partner for CSU Online might be needed in order to provide the necessary student support services, including, for example, advising, financial aid, career services, and tutoring. In addition, a business partner could provide the 24/7/365 help desk support absolutely critical for CSU Online. Market research and marketing of programs are other potential areas for the contributions of a business partner. Instructional design support for faculty is another potential area, as is technological support for the effort.

The second decision defined a strategy in terms of which types of online programs to add in which order.

Following from the bedrock of our Principles, the TSC supported a tactical entrance into CSU Online by focusing on those areas in which CSU campuses are already strong and proficient. We believe that it is imperative to start from a position of program strength rather than to straggle into the market in areas as yet not fully defined or ready for implementation. Accordingly, the TSC recommends that CSU Online address six areas, with two ready for immediate roll out.

  1. The 60 or so Masters level programs that exist throughout the CSU should comprise our initial effort with an eye toward serving the extensive mid-career professional and unemployed adults who are in need of this level of education to advance their careers.
  2. Our second focus should entail the presentation of two or three degree completion programs in an effort to enhance workforce development.

An important note on both of these areas is that they are both self-support, offered through continuing or extended education groups and not eligible for state funding. These self-support programs do not have the same constraints on setting tuition and tend to set it significantly higher than state-supported mainline programs.

The overview also estimated a timeline that included an RFP for the commercial partner (OSP) to be released in Fall 2011.

By late 2011 there were already signs of faculty discontent over the limited inclusion of faculty in CSO decision-making and over the planned usage of a commercial partner. The Cal State Dominguez Hills faculty senate resolved in November:

Growing faculty concerns about the minimal faculty input in the development of the Online Initiative, as well as the direction the Initiative may be taking have led three Academic Senates (CSUSB, CSU Stanislaus, and Sonoma State) to pass resolutions calling for the suspension of the Initiative until basic issues are addressed and approved by campus senates. In addition a “CSU Online Faculty Task Force,” consisting of over 80 faculty across the CSU, has been actively questioning features of the Initiative and has written an open letter to Chancellor Reed expressing opposition to outsourcing to for‐profit online providers or attempts to circumvent collective bargaining.

The task force open letter can be found here.

2012

The RFP was actually released in April 2012. To my reading, the document was disorganized and lacked enough structure to let bidders know what to expect or what was needed. On schedule and enrollments, the RFP advised the following:

1.5 Cal State Online expects to officially launch in January 2013, with as many as ten degree programs. For the late fall 2012 term (beginning in late October 2012) Cal State Online anticipates offering two to three courses in several programs in a live beta test term.

1.6 ENROLLMENT PROJECTIONS Vendors should base proposals on 1,000 three unit course enrollments in year one and 3,000 three unit course enrollments in year two.

The RFP evaluation process was described in the first CSO Advisory Board meeting notes from June 2012, showing the final decision to select between Pearson and Academic Partnerships. Pearson was selected as the partner, and their contract[2] has an unexplained change in enrollments.

The spending amounts detailed below (which may also be increased as appropriate, in Pearson’s discretion) are dependent on Cal State Online meeting the defined Enrollment thresholds for the prior calendar year. If Cal State Online does not meet such thresholds, the spending amounts for the then-current calendar year will be adjusted to reflect the actual number of enrollments achieved during the previous calendar year.

Pearson Thresholds

I do not know how the numbers went from an estimate of 1,000 course enrollments for 2013 in the RFP to a minimum of 16,701 course enrollments for 2013 in the contract. In retrospect, this huge increase can be described as wishful thinking, perhaps with the goal of making the financial case work for both CSO and Pearson.

The Advisory Board also decided in the June 2012 meeting to set standardized tuition for CSO at $500 per unit (compared with approximately $270 per unit for a traditional campus student taking 12 units per semester).

By October CSO had identified the specific campus programs interested in participating, documented in the Launch Programs Report. The first page called out two of the first programs bringing in 200 students and 20 students – in other words, CSO migrated several hundred existing students to get started.

[Launch Programs Report, October 2012 – page 1 of 3]

2013: Winter and Spring

In the Spring 2013 term, CSO kicked off with the Launch Programs described in the February 2013 Advisory Board meeting minutes.

Launch Programs: 6 Programs from 3 Campuses

  • CSU Fullerton launched 3 courses in their online Business BA program January 14th 2013; marketing and recruiting of next group of students in progress. 35 + 18 Existing Students.
  • CSU Dominguez Hills will launch their BA MBA and PA MPA online programs in spring 2013; marketing and recruiting students is in progress. BA Applied Studies will launch in summer 2013; first CSU reconnect program.
  • CSU Monterey Bay will launch two new masters programs, Technology and MS in IT Management in spring 2013 and MS in Instructional Science and Technology will launch in summer 2013. Marketing to begin ASAP.

The notes also call out a financial model (document not shared with Advisory Board but notes taken) with three scenarios.

Three scenarios:

  • Scenerio [sic] 1: Baseline Growth Modeling where projected enrollments grom [sic] from 188 to 7500; programs grom from 3 to 25; revenues from to over $11 million and additional investment required $2.2 million. Break even in FY 12/14.
  • Scenario 2: Break Even in fiscal year 2012/14 Modeling where enrollments from from 188 to 15,750, programs grom from 3 to 30, revenues grom to over 23 million and additional investment required is $1 million.
  • Scenario 3: Best/Strong Growth where enrollments grow from 254 to 36,250, programs grow from 3 to 50, revenues grow to over $54 million and additional investment required is $1 million.

The budget planning seems to fall on fiscal years (Jul 1 – Jun 30), whereas all other CSO planning was based on calendar years. Note that the best case scenario included an additional $1 million in CSU investment, and the baseline scenario estimated 7,500 course enrollments from Fall 13 thru Spring 14. Based on an email exchange with CSU Public Affairs, Fall 13 saw almost 1,200 course enrollments, which would have required a six-fold increase in Spring 14 just to make the baseline scenario.

Update: Also in February, CSO executive director Ruth Claire Black testified at the Little Hoover Commission (an independent state oversight board in California) describing the CSO initiative as part of discussion on state needs in higher education.

By the April Advisory Board meeting, CSO was seeing some positive interest from campuses, although the numbers were fairly modest compared to previous CSO estimations.

April Launch Report

  • Fullerton business degree completion program is making good progress; 83 applications pending, 17 admitted for fall. Heavily oversubscribed for Fullerton. Good review from students on coaching. 50% of inquiries are for Fullerton program.
  • Dominguez Hills BS Applied Studies program starts May 4. Large cohort of existing students. 13 students admitted for summer; fall 17 students admitted.
  • The next undergraduate program will be the Northridge Reconnect program. In the next 30 days website will be updated to reflect Reconnect.
  • Fresno MBA 60 inquiries; 1 applicant and 1 admission
  • Other 4 grad programs slow build; redirect marketing resources towards masters programs
  • Fresno Homeland Security Certificate website and Humboldt Golden Four are up on website. We are seeing equal demand across the courses (3 GE courses)
  • Interest list has grown significantly; campuses who are not currently participating Cal State Online is full for fall. If existing Cal State Online campus may have capacity. Sociology at Fullerton. Dominguez Hills QA for fall start. Taking advantage of launch financial model.

The notes showed the group watching new activity from the California state legislature regarding online education, including the infamous SB 520.[3] This raised the question of what Cal State Online’s role should be with this new emphasis. [emphasis added below]

Can Cal State Online fulfill the role of putting all online? Where should we focus? State side or Cal State Online. Chancellor wants this to happen. Ruth and Marge are working on a plan. Need to be cautious to not cause confusion to students and not diminish Cal State Online.

Requirement of bill is that courses must be articulated statewide. Makes sense for Cal State Online to take ownership.

In May the CSU faculty senate passed a resolution calling on Cal State Online to promote all online programs and not just the six run through CSO.

RESOLVED: That all online degree programs offered by CSU campuses be given the same degree of prominence on the Calstateonline.com and Calstateonline.net websites as the online degree programs offered through Cal State Online; and be it further

RESOLVED: That there should be no charge for listing state-support online degree programs on the Calstateonline.com and Calstateonline.net websites;

By the June Advisory Board meeting, there was some progress for Fall enrollments, and there was concern that the state legislature did not understand the bottleneck problem.

Legislature thinks that if students knew about online courses our bottleneck problem would be solved. State is not funding FTES. Enrolling students online will need state subsidy. There is a belief that we can educate students online cheaply. There is a disconnect in Sacramento. Enrollment caps are more the issue, not bottlenecks.

There was also an enrollment presentation (PDF, 221 KB) for the June meeting.


2013: Summer and Fall

Despite planned meetings every two months, the CSO Advisory Board did not meet again until October, and in this interim the decision was made to abandon the original concept and to change the Pearson contract. Advisory Board members were not pleased with the process.

In early summer Pearson requested changes in the CSU/Pearson contract; wanted to increase CSU costs for services. The quality of the marketing provided by Pearson was not adequate. There were multiple meetings between Pearson and Cal State Online to resolve concerns resulting in changes to the contract.

The new marketing firm for Cal State Online is DENT; it replaces Pearson and started in July 2013. So far there is a high level of satisfaction.

A communication was distributed to the Advisory Board and CSU system stakeholders on October 17th regarding the Pearson/Cal State Online contract changes. The communication can be found on the Cal State Online CSYOU site [ed. no longer available].

Discussion/Comments: 

  • Members of the Advisory Board stated that there was little to no communication to them about the changes taking place. The last board meeting was a teleconference call in June, and the August in-person meeting was cancelled.
    • There was a need to keep only a small number of people involved during the complicated negotiation process

The CSO entity was never formed as a 501(c)3 organization, and with the summer changes CSO would now report to Academic Affairs. The meeting notes further describe the changes.

The current Cal State Online business model will be in place until the end of 2013 and will then change. The Advisory Board will help identify opportunities and provide direction. It is anticipated that this will result in some changes in current program participation but hope that the current campuses will continue. Since campuses now have the option to use the LMS platform of their choice some campuses may elect to change to their own platform. [snip]

The Governor contributed $10 million to increase online education within the CSU. AB 386 Levine. Public postsecondary education: cross-enrollment: online education at the California State University was approved by the Governor on September 26, 2013 [emphasis added].

  • With the changes in the Pearson relationship and the passing of AB 386 we are now taking a much broader view of Cal State Online; will be used as a store front for CSU online courses. All online courses and programs in system will have Cal State Online as the store front.

The CSU faculty senate unanimously passed another resolution related to CSO in November. The resolution applauded the movement of CSO to report to Academic Affairs and the allowance for campus selection of LMS, but the real focus was the lack of faculty input in the decision-making.

RESOLVED: That the Academic Senate of the California State University (ASCSU) express its dismay that recent changes to Cal State Online were announced to system constituencies without review or input from the Cal State Online Advisory Board; and be it further [snip]

RESOLVED: That the ASCSU contend that the dissolution of the Cal State Online Board should not occur until a plan for a new governance structure that includes faculty is established, and be it further

RESOLVED: That the ASCSU recommend the establishment of a newly configured Cal State Online system-wide advisory committee to include at least 5 faculty members, and the creation of a charge, in a partnership between the ASCSU and the Academic Affairs division of the Chancellor’s Office;

This issue – involvement in decision-making – was continued at the final Advisory Board meeting just three days after the senate resolution.

Ephraim Smith (VP Academic Affairs): The Cal State Online Board was originally created for a 501c3 organization but there was a change in direction and did not pursue 501c3; the board then acted as advisory. Now that Cal State Online has moved to Academic Affairs the question is how should it interact with constituencies; work through existing committees? Need to discuss.

There are three full pages of notes on the resultant discussion, ending in a plan to form a Commission that looks broadly at online education across the CSU.

2014

Despite the decision being made in Fall 2013 on the major changes to Cal State Online, the systemwide communication listed in my July post was not made until June 2014. The above description is mostly based on CSO documentation, but I plan to add a few of my own thoughts of the lessons learned from this short-lived online initiative in a future post.

  1. CSU officials did not respond to requests to be interviewed for this story. The offer is still open if someone would like to comment.
  2. The contract is no longer available in public, so I will only share one excerpt here.
  3. Disclosure: Michael and I wrote a white paper for 20 Million Minds Foundation calling out how Cal State Online did not attempt to address relieving bottleneck courses for matriculated students, which was the purported goal of much of the state legislative debate.

The post Cal State Online: Public records shed light on what happened appeared first on e-Literate.

Oracle EBS Techno Functional Support: Additional Services Series Pt. 4 [VIDEO]

Chris Foot - Thu, 2014-08-28 13:37

Transcript

Welcome back to our Additional Services series. Today we’re highlighting our Oracle EBS Techno Functional Support, a service we offer to help customers make sure their Oracle applications are running properly.

At RDX we offer full Oracle EBS support from a team of experts, ensuring your mission-critical environments are available 24×7. Our team helps you customize your applications to meet business needs, and even provides advice about the best features to use so you can take advantage of advanced functionality. When problems do occur, RDX assigns experts to work Severity 1 issues around the clock.

Our dedicated EBS experts have cross-functional experience and adhere to industry best practices. We’ll also assign project managers to ensure we are on time and on budget with projects.

For more information on the full breadth of our Oracle EBS techno functional support, follow the link below! We’ll see you next time.

The post Oracle EBS Techno Functional Support: Additional Services Series Pt. 4 [VIDEO] appeared first on Remote DBA Experts.

Tungsten Replicator: MariaDB Master-Master and Master-Slave Topologies

Pythian Group - Thu, 2014-08-28 12:45

A common concern in the MySQL community is how best to implement high availability for MySQL. There are various built-in mechanisms to accomplish this, such as Master/Master and Master/Slave replication using binary logs, as well as FOSS solutions such as Galera and Tungsten, just to name a few. Oftentimes, IT managers and DBAs alike opt to avoid implementing a third-party solution due to the added administrative overhead, without fully evaluating the available solutions. In today’s blog post, I would like to describe the process for configuring a Master/Slave topology and switching to a Master/Master topology with Tungsten Replicator.

Tungsten Replicator is a well-known tool that has gained much acclaim in the area of MySQL enterprise database implementation; however, many teams tend to stay away from it to avoid over-complicating the replication topology. I have listed and described all of the steps required to configure a replication topology for 1 to N nodes (today’s how-to guide covers a 2-node implementation, but I will describe the additional steps that would be required to implement these topologies for N nodes).

The 2 nodes I will be using are vm128-142 and vm129-117; the first part of the document contains the steps that need to be performed on both nodes, and the latter part describes the steps to be performed on either one of the two nodes. As soon as Tungsten Replicator has been installed on both nodes with the same configuration files, the switch is as simple as “one, two, three” – all it requires is running the script that configures the topology of your choice. The main topologies that are available are:

  • Master – Slave: Replication flowing from 1 .. N nodes using Tungsten Replicator
  • Master – Master: Bi-directional replication for 1 .. N nodes
  • Star Topology: A central node acts as a hub and all spokes are Master nodes
  • Fan-in Topology: A single slave node with replication from 1 .. N Master nodes

(Check out https://code.google.com/p/tungsten-replicator/wiki/TRCMultiMasterInstallation for further details)

So, let’s continue with the actual steps required (please note I’m using the “root” account with SSH passwordless authentication for the purposes of this article; it is best to define another user on production systems). The parameters and values in red text require customization for your system / topology. The configuration file contents are all indented in the text in royal blue:

### The following commands should be executed on all nodes (vm128-142 & vm129-117 in this how-to)

su - root
cd /root # or alternatively to a place like /opt/ or /usr/local/
vi /etc/yum.repos.d/MariaDB.repo

 # MariaDB 5.5 CentOS repository list - created 2014-08-25 16:59 UTC
 # http://mariadb.org/mariadb/repositories/
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/5.5/centos6-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1

vi /etc/security/limits.conf

 # add the following line
 * - nofile 65535

yum update

yum install wget MariaDB-server MariaDB-client ruby openssh-server rsync 
yum install java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64 
yum install http://www.percona.com/downloads/XtraBackup/LATEST/binary/redhat/6/x86_64/percona-xtrabackup-2.2.3-4982.el6.x86_64.rpm
ln -s /usr/bin/innobackupex /usr/bin/innobackupex-1.5.1

wget http://downloads.tungsten-replicator.org/download.php?file=tungsten-replicator-2.2.1-403.tar.gz
tar -xzvf download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
rm download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
cd tungsten-replicator-2.2.1-403/

vi cookbook/COMMON_NODES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 export NODE1=vm128-142.dlab.pythian.com
 export NODE2=vm129-117.dlab.pythian.com
 #export NODE3=host3
 #export NODE4=host4
 #export NODE5=host5
 #export NODE6=host6
 #export NODE7=host7
 #export NODE8=host8

vi cookbook/USER_VALUES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 # User defined values for the cluster to be installed.

 cookbook_dir=$(dirname $0 )

 # Where to install Tungsten Replicator
 export TUNGSTEN_BASE=/opt/tungsten-replicator/installs/cookbook

 # Directory containing the database binary logs
 export BINLOG_DIRECTORY=/var/lib/mysql

 # Path to the script that can start, stop, and restart a MySQL server
 export MYSQL_BOOT_SCRIPT=/etc/init.d/mysql

 # Path to the options file
 export MY_CNF=/etc/my.cnf

 # Database credentials
 export DATABASE_USER=tungsten
 export DATABASE_PASSWORD=tungsten
 export DATABASE_PORT=3306

 # Name of the service to install
 export TUNGSTEN_SERVICE=cookbook

 # Replicator ports
 export RMI_PORT=10000
 export THL_PORT=2112

 # If set, replicator starts after installation
 [ -z "$START_OPTION" ] && export START_OPTION=start

 ##############################################################################
 # Options used by the "direct slave " installer only
 # Modify only if you are using 'install_master_slave_direct.sh'
 ##############################################################################
 export DIRECT_MASTER_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_SLAVE_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_MASTER_MY_CNF=$MY_CNF
 export DIRECT_SLAVE_MY_CNF=$MY_CNF
 ##############################################################################

 ##############################################################################
 # Variables used when removing the cluster
 # Each variable defines an action during the cleanup
 ##############################################################################
 [ -z "$STOP_REPLICATORS" ] && export STOP_REPLICATORS=1
 [ -z "$REMOVE_TUNGSTEN_BASE" ] && export REMOVE_TUNGSTEN_BASE=1
 [ -z "$REMOVE_SERVICE_SCHEMA" ] && export REMOVE_SERVICE_SCHEMA=1
 [ -z "$REMOVE_TEST_SCHEMAS" ] && export REMOVE_TEST_SCHEMAS=1
 [ -z "$REMOVE_DATABASE_CONTENTS" ] && export REMOVE_DATABASE_CONTENTS=0
 [ -z "$CLEAN_NODE_DATABASE_SERVER" ] && export CLEAN_NODE_DATABASE_SERVER=1
 ##############################################################################


 #
 # Local values defined by the user.
 # If ./cookbook/USER_VALUES.local.sh exists,
 # it is loaded at this point

 if [ -f $cookbook_dir/USER_VALUES.local.sh ]
 then
 . $cookbook_dir/USER_VALUES.local.sh
 fi

service iptables stop 

 # or open ports listed below:
 # 3306 (MySQL database)
 # 2112 (Tungsten THL)
 # 10000 (Tungsten RMI)
 # 10001 (JMX management)

vi /etc/my.cnf.d/server.cnf

 # These groups are read by MariaDB server.
 # Use it for options that only the server (but not clients) should see
 #
 # See the examples of server my.cnf files in /usr/share/mysql/
 #

 # this is read by the standalone daemon and embedded servers
 [server]

 # this is only for the mysqld standalone daemon
 [mysqld]
 open_files_limit=65535
 innodb-file-per-table=1
 server-id=1 # make server-id unique per server
 log_bin
 innodb-flush-method=O_DIRECT
 max_allowed_packet=64M
 innodb-thread-concurrency=0
 default-storage-engine=innodb
 skip-name-resolve

 # this is only for embedded server
 [embedded]

 # This group is only read by MariaDB-5.5 servers.
 # If you use the same .cnf file for MariaDB of different versions,
 # use this group for options that older servers don't understand
 [mysqld-5.5]

 # These two groups are only read by MariaDB servers, not by MySQL.
 # If you use the same .cnf file for MySQL and MariaDB,
 # you can put MariaDB-only options here
 [mariadb]

 [mariadb-5.5]

service mysql start
mysql -uroot -p -e"CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';"
mysql -uroot -p -e"GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;"
mysql -uroot -p -e"FLUSH PRIVILEGES;"

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub | ssh vm129-117 'cat >> ~/.ssh/authorized_keys' # from vm128-142
cat ~/.ssh/id_rsa.pub | ssh vm128-142 'cat >> ~/.ssh/authorized_keys' # from vm129-117
chmod 600 ~/.ssh/authorized_keys

cookbook/validate_cluster # this is the command used to validate the configuration

vi cookbook/NODES_MASTER_SLAVE.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 CURDIR=`dirname $0`
 if [ -f $CURDIR/COMMON_NODES.sh ]
 then
 . $CURDIR/COMMON_NODES.sh
 else
 export NODE1=
 export NODE2=
 export NODE3=
 export NODE4=
 export NODE5=
 export NODE6=
 export NODE7=
 export NODE8=
 fi

 export ALL_NODES=($NODE1 $NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)
 # indicate which servers will be masters, and which ones will have a slave service
 # in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
 # These values are used for automated testing

 #for master/slave replication
 export MASTERS=($NODE1)
 export SLAVES=($NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)

## The following commands should be performed on just one of the nodes
## In my case either vm128-142 OR 129-117

cookbook/install_master_slave # to install master / slave topology
cookbook/show_cluster # here we see master - slave replication running

 --------------------------------------------------------------------------------------
 Topology: 'MASTER_SLAVE'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 cookbook [master] seqno: 1 - latency: 0.514 - ONLINE
 # node vm129-117.dlab.pythian.com
 cookbook [slave] seqno: 1 - latency: 9.322 - ONLINE

cookbook/clear_cluster # run this to destroy the current Tungsten cluster 

cookbook/install_all_masters # to install master - master topology 
cookbook/show_cluster # and here we've switched over to master - master replication

 --------------------------------------------------------------------------------------
 Topology: 'ALL_MASTERS'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 alpha [master] seqno: 5 - latency: 0.162 - ONLINE
 bravo [slave] seqno: 5 - latency: 0.000 - ONLINE
 # node vm129-117.dlab.pythian.com
 alpha [slave] seqno: 5 - latency: 9.454 - ONLINE
 bravo [master] seqno: 5 - latency: 0.905 - ONLINE

Categories: DBA Blogs

PostgreSQL vs. MySQL: Part One

Chris Foot - Thu, 2014-08-28 11:50

PostgreSQL and MySQL are recognized as two of the world's most popular open source database architectures, but there are some key differences between them.

Database administration professionals often favor both environments for their raw, customizable formats. For those who are unfamiliar with the term, open source means the code used to create these architectures is divulged to the public, allowing IT experts of every ilk to reconstruct the program to fit specific needs. While MySQL and PostgreSQL are similar in this respect, they diverge in several important ways.

A quick history: PostgreSQL

Carla Schroder, a contributor to OpenLogic, acknowledged PostgreSQL as the older solution, having been developed at the University of California, Berkeley in 1985. Thousands of enthusiasts from around the world have participated in the development and support of this architecture. DigitalOcean labeled the solution an object-relational database management system capable of handling mission-critical applications and high-frequency transactions. Here are some other notable traits:

  • Fully compliant with atomicity, consistency, isolation and durability (ACID)
  • Uses Kerberos and OpenSSL for robust protection features
  • Point-in-time recovery enables users to implement warm standby servers for quick failover

A quick history: MySQL
As for MySQL, Schroder noted this particular system is about nine years younger than its predecessor, having been created by MySQL AB in 1994. It provides a solid foundation for Web developers, as it's part of the software bundle comprised of Linux, Apache HTTP Server, MySQL and PHP (LAMP). MySQL was first blueprinted to be a reliable Web server backend because it used an expedited indexed sequential access method. Over the years, experts have revised MySQL to support a variety of other storage engines, such as the MEMORY engine that provides temporary in-memory tables.
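To make the storage engine point concrete, the engine is chosen per table at creation time - a minimal sketch with hypothetical table names:

-- scratch data held entirely in memory (lost on restart)
CREATE TABLE session_cache (
    session_id CHAR(32) PRIMARY KEY,
    payload    VARCHAR(255)
) ENGINE=MEMORY;

-- durable, transactional data on the default InnoDB engine
CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    total    DECIMAL(10,2)
) ENGINE=InnoDB;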

Although open sourced, MySQL isn't community-governed, and some versions (all of which are now owned and distributed by Oracle) cost a small amount of capital.

Part Two will dig deeper into these two architectures, describing use cases, their respective capabilities and more.

The post PostgreSQL vs. MySQL: Part One appeared first on Remote DBA Experts.

adopreports utility in R12.2

Vikram Das - Thu, 2014-08-28 11:01
I discovered a utility in R12.2 when I was looking for the directory of adop:

which adop
$NE_BASE/EBSapps/appl/ad/bin/adop

cd $NE_BASE/EBSapps/appl/ad/bin/
$ ls
adop  adopreports

Curious, I executed adopreports:
$ adopreports

Enter the APPS username: apps
Enter the APPS Password:



    Online Patching Diagnostic Reports Main Menu
    --------------------------------------------

    1.  Run edition reports
    2.  Patch edition reports
    3.  Other generic reports
    4.  Exit

    Enter your choice [4]: 3




    Other Generic Reports Sub Menu
    ------------------------------

    1.  Editions summary
    2.  Editioned objects summary
    3.  Free space in important tablespaces
    4.  Status of critical AD_ZD objects
    5.  Actual objects in current edition
    6.  Objects dependencies
    7.  Objects dependency tree
    8.  Editioning views column mappings
    9.  Index details for a table
    10.  Inherited objects in the current edition
    11.  All log messages
    12.  Materialized view details
    13.  Database sessions by edition
    14.  Table details (Synonyms, EV, etc.)
    15.  Count and status of DDL execution by phase
    16.  Back to main menu

This is a great utility for R12.2.
Categories: APPS Blogs

12c: How to Restore/Recover a Small Table in a Large Database

Pythian Group - Thu, 2014-08-28 09:35

As a DBA, you will receive requests from developers or users indicating that they deleted some data from a small table in a large database a few hours prior. They will probably want you to recover the data as soon as possible, and it will likely be a critical production database. Flashback will not be enabled, and the recycle bin will have been purged. Restoring a full database using RMAN might take you over 10 hours, and you will need a spare server with big storage. It looks like it’s going to be a difficult and time-consuming task for you.

In Oracle Database 12c, there is a method available which allows us to recover the table more efficiently and at a lower cost. The method is to create a second database (often called a stub database) using the backup of the first database. In this situation, we restore the SYSTEM, SYSAUX, and UNDO tablespaces and the individual tablespaces that contain the data we want to restore. After the restore is complete, we take any tablespaces that we did not restore offline. We then apply the archived redo logs to the point in time that we want to restore the table to. Having restored the database to the appropriate point in time, we then use Oracle Data Pump to export the objects, and import them back into the original database, again using Oracle Data Pump. Oracle Database 12c introduces new functionality in RMAN that supports point-in-time restore of individual database tables and individual table partitions.
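In RMAN terms, this boils down to a single command of roughly the following shape - the owner, table, timestamp, destination and remap name below are placeholders, and a concrete run is shown in the example that follows:

RMAN> recover table <owner>.<table>
      until time "to_date('<timestamp>','mm/dd/yyyy hh24:mi:ss')"
      auxiliary destination '<scratch directory>'
      remap table <owner>.<table>:<new_table_name>;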

Here is an example of when I tested this new feature:

1. The database TEST has 9 tablespaces and a schema called Howie. I created a table called TEST1 with 19,377 records in the tablespace DATA_HOWIE.

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO  CON_ID INSTANCE_MO EDITION FAMILY
--------------- ---------------- ---------------------------------------------------------------- ----------------- --------- ------------ --- ---------- ------- --------------- ---------- --- ----------------- ------------------ --------- --- ---------- ----------- ------- --------------------------------------------------------------------------------
1 TEST             12cServer1                                                       12.1.0.1.0        17-AUG-14 OPEN         NO           1 STARTED                 ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMALNO            0 REGULAR     EE

SQL> select tablespace_name from dba_tablespaces order by tablespace_name;

TABLESPACE_NAME
------------------------------
DATA_HOWIE
DATA_TB1
DATA_TB2
DATA_TB3
SYSAUX
SYSTEM
TEMP
UNDOTBS1
USERS

9 rows selected.

SQL> conn howie
Enter password:
Connected.
SQL> create table test1 as select * from dba_objects;

Table created.

SQL> select count(*) from test1;

COUNT(*)
----------
19377

SQL> select table_name,tablespace_name from user_tables where table_name='TEST1';

TABLE_NAME                                                                                                                       TABLESPACE_NAME
-------------------------------------------------------------------------------------------------------------------------------- ------------------------------
TEST1                                                                                                                            DATA_HOWIE

2. The database is in archivelog mode, and I took a full backup of the database.

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 20:16:17 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> run
{
allocate channel d1 type disk format '/u01/app/oracle/RMAN/rmn_%d_t%t_p%p';
backup
incremental level 0
tag backup_level0
filesperset 1
(database)
plus archivelog ;
release channel d1;
}2> 3> 4> 5> 6> 7> 8> 9> 10> 11>

3. The data in the table howie.test1 has been deleted.

SQL> select sysdate,current_scn from v$database;

SYSDATE             CURRENT_SCN
------------------- -----------
08/17/2014 21:01:15      435599

SQL> delete test1;

19377 rows deleted.

SQL> commit;

Commit complete.

4. I ran the following scripts to recover the data to an alternative table, howie.test1_temp, as of the point in time "08/17/2014 21:01:15":

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 21:01:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> recover table howie.test1
until time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')"
auxiliary destination '/u01/app/oracle/aux'
remap table howie.test1:test1_temp;2> 3> 4>

5. The scripts above will take care of everything and you will see the data has been restored to howie.test1_temp

SQL> select count(*) from TEST1_TEMP;

COUNT(*)
----------
19377

SQL> select count(*) from TEST1;

COUNT(*)
----------
0

Let’s take a look at the log of RMAN recovery and find out how it works.

1. Creation of the auxiliary instance

Creating automatic instance, with SID='ktDA'

initialization parameters used for automatic instance:
db_name=TEST
db_unique_name=ktDA_pitr_TEST
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/app/oracle/aux
log_archive_dest_1='location=/u01/app/oracle/aux'
#No auxiliary parameter file used

2. Restore of the control file for the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
}

3. A list of datafiles that will be restored, followed by their restore and recovery in the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  8 online";
# recover and open resetlogs
recover clone database tablespace  "DATA_HOWIE", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

4. Export of tables from the auxiliary instance via Oracle Data Pump

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_ktDA_BAkw":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 3 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "HOWIE"."TEST1"                             1.922 MB   19377 rows
EXPDP> Master table "SYS"."TSPITR_EXP_ktDA_BAkw" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_ktDA_BAkw is:
EXPDP>   /u01/app/oracle/aux/tspitr_ktDA_70244.dmp
EXPDP> Job "SYS"."TSPITR_EXP_ktDA_BAkw" successfully completed at Sun Aug 17 21:03:53 2014 elapsed 0 00:00:14
Export completed

5. Import of tables, constraints, indexes, and other dependent objects into the target database from the Data Pump export file

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_ktDA_lube" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_ktDA_lube":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "HOWIE"."TEST1_TEMP"                        1.922 MB   19377 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_ktDA_lube" successfully completed at Sun Aug 17 21:04:19 2014 elapsed 0 00:00:19
Import completed

6. Clean-up of the auxiliary instance

Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_temp_9z2yqst6_.tmp deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_3_9z2yrkqm_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_2_9z2yrj35_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_1_9z2yrh2r_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/datafile/o1_mf_data_how_9z2yrcnq_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_sysaux_9z2yptms_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_undotbs1_9z2yq9of_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_system_9z2yp0mk_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/controlfile/o1_mf_9z2yos1l_.ctl deleted
auxiliary instance file tspitr_ktDA_70244.dmp deleted
Finished recover at 17-AUG-14
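With the rows verified, a natural last step (my sketch, using the table names from this example) is to copy the recovered data back into the original table and drop the staging copy:

SQL> insert into howie.test1 select * from howie.test1_temp;
SQL> commit;
SQL> drop table howie.test1_temp purge;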
Categories: DBA Blogs

Numbers: Administrative Costs Soaring? Maybe not

Michael Feldstein - Thu, 2014-08-28 09:19

August 27, 2014

There’s just a mind-boggling amount of money per student that’s being spent on administration

Andrew Gillen, quoted in "New Analysis Shows Problematic Boom In Higher Ed Administrators," Huffington Post, August 26, 2014

 Administrative growth drives up costs at state-owned universities

Debra Erdley, TribLive, July 28, 2013

 Across U.S. higher education, nonclassroom costs have ballooned, administrative payrolls being a prime example.

Wall Street Journal as quoted by Phil Hill, e-Literate, January 2, 2013

 Administrative costs on college campuses are soaring.

J. Paul Robinson, quoted in "Bureaucrats Paid $250,000 Feed Outcry Over College Costs," Bloomberg News, November 14, 2012

 Administrative Costs Mushrooming

George Leef, John William Pope Center for Higher Education Policy, September 15, 2010

 

Are these true, or generalizations that lack the rigor of research? What does the data say?

Since 2004 The National Center for Education Statistics (NCES) Integrated Postsecondary Education Data System (IPEDS) financial survey of colleges and universities has reported the costs of Institutional Support in a standard form. This broad category includes “general administrative services, central executive-level activities concerned with management, legal and fiscal operations, space management, employee personnel and records, … and information technology.” In business this is often called “administration.”

Data from NCES's Digest of Education Statistics 2012 show decreases in cost per student from 2003-2004 through 2010-2011 in every sector except public 4-year colleges and universities, which increased spending by 4.1%, as shown in Table 1.

Institutional Support per Student    2003-04    2010-11    Change
---------------------------------   --------   --------   -------
Public 4 year                         $2,212     $2,302      4.1%
Private 4 year                         4,611      3,887    -15.7%
Public 2 year                          1,045        875    -16.3%
Private 2 year                           783        401    -48.8%

Table 1 – Cost of “administration” per enrolled student

These data are expressed in July 2014 dollars, adjusted using the Consumer Price Index (CPI-U), so the results are unaffected by inflation. The year 2003-2004 was selected for comparison because it was the first year whose data definitions and formats were consistent with 2010-11. Because private colleges and universities do not report operation of plant, that cost was omitted from the percentage computations for both sectors. Headcount was used since administrative expenses are more closely related to the enrollment of real students than to a mythical full-time equivalent (FTE).
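As a quick check of the arithmetic, using the public 4-year row of Table 1 (both figures already in constant July 2014 dollars):

(2,302 − 2,212) / 2,212 ≈ 0.041, or the 4.1% increase reported above

The private 2-year row works out the same way: (401 − 783) / 783 ≈ −48.8%.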

These data are shown graphically in Figure 1.


Figure 1 – Comparative Administrative Expenses 2003-2004 and 2010-2011

Data showing administration as a percent of institutional expenses, omitting independent organizations, hospitals, and auxiliary enterprises, are shown in Figure 2.


Figure 2 – Administration Expenses as a Percent of Institutional Expenses

The percentages are nearly equal for the two years, even though administration expenses per student declined over this period everywhere except at public 4-year colleges and universities. This reduction, likely true for the cost of instruction as well, reflects increased enrollment against institutional budgets that were typically less than or about the same as in 2003-2004.

The IPEDS revision introduced in the late 1970s and early 1980s was based on program budgeting. The mission of the college or university was considered to be a combination of instruction, research, and public service, sometimes called direct costs. The library and computing were consolidated into academic support on the belief that books would transition into electronic documents. Student services is another indirect category that includes admissions, the registrar, and activities that contribute to students' emotional and physical well-being, intramural athletics, and student organizations. Intercollegiate athletics and student health services may be included "except when operated as self-supporting auxiliary enterprises."

IPEDS tried to keep financial aid out of the institutional expenses of the mission-based programs since it is, in effect, a transfer payment from one student (tuition paid) to another (tuition discount).

NCES now makes the data from these surveys available using several different statistical tools (software).

The NCES data are very useful for analysis and for communicating with a public that seems to be receiving more opinions than facts.

This analysis is an example of verifying assertions that administration expenses are mushrooming, soaring, or ballooning.

Are administrative expenses soaring? The evidence is “no.” But that doesn’t make a sensational headline.

The post Numbers: Administrative Costs Soaring? Maybe not appeared first on e-Literate.

Building a MariaDB Galera Cluster with Docker

Pythian Group - Thu, 2014-08-28 08:13

There's been a lot of talk lately about Docker for running processes in isolated userspace (or in the cloud, for that matter). Virtualization is a great way to compartmentalise applications and processes; however, the overhead of virtualization isn't always worth it – in fact, without directly attached storage, IO degradation can seriously impact performance. The solution? Perhaps Docker, with its easy-to-use CLI as well as its lightweight implementation of cgroups and kernel namespaces.

Without further ado, I present a step-by-step guide on how to build a MariaDB 5.5 Galera Cluster on Ubuntu 14.04. The same guide can probably be applied to MariaDB versions 10+; however, I've stuck with 5.5 since the latest version of MariaDB Galera Cluster is still in beta.

So we start off by modifying the "ufw" firewall policy to accept forwarded packets, and perform a "ufw" service restart for good measure:

root@workstation:~# vi /etc/default/ufw

DEFAULT_FORWARD_POLICY="ACCEPT"

root@workstation:~# service ufw restart
ufw stop/waiting
ufw start/running

I'm assuming you already have Docker installed – it is available as a package within the Ubuntu repositories and also in the Docker repositories (see http://docs.docker.com/installation/ubuntulinux/). You'll also need LXC installed ("apt-get install lxc" should suffice) in order to attach to the Linux containers / Docker images; a minimal install sketch follows.
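For completeness, here is that install on Ubuntu 14.04 (docker.io is the Ubuntu-repository build of Docker; skip it if you installed from the Docker repositories instead):

root@workstation:~# apt-get -q -y install docker.io lxc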

The next step is pulling the Docker Ubuntu repository in order to customize an image for our purposes:

root@workstation:~# docker pull ubuntu
Pulling repository ubuntu
c4ff7513909d: Pulling dependent layers 
3db9c44f4520: Download complete 
c5881f11ded9: Download complete 
c4ff7513909d: Download complete 
463ff6be4238: Download complete 
822a01ae9a15: Download complete 
75204fdb260b: Download complete 
511136ea3c5a: Download complete 
bac448df371d: Download complete 
dfaad36d8984: Download complete 
5796a7edb16b: Download complete 
1c9383292a8f: Download complete 
6cfa4d1f33fb: Download complete 
f127542f0b61: Download complete 
af82eb377801: Download complete 
93c381d2c255: Download complete 
3af9d794ad07: Download complete 
a5208e800234: Download complete 
9fccf650672f: Download complete 
fae16849ebe2: Download complete 
b7c6da90134e: Download complete 
1186c90e2e28: Download complete 
0f4aac48388f: Download complete 
47dd6d11a49f: Download complete 
f6a1afb93adb: Download complete 
209ea56fda6d: Download complete 
f33dbb8bc20e: Download complete 
92ac38e49c3e: Download complete 
9942dd43ff21: Download complete 
aa822e26d727: Download complete 
d92c3c92fa73: Download complete 
31db3b10873e: Download complete 
0ea0d582fd90: Download complete 
cc58e55aa5a5: Download complete

After the download is complete, we can check the Ubuntu images available for customization with the following command:

root@workstation:~# docker images
 REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
 ubuntu              14.04.1             c4ff7513909d        12 days ago         225.4 MB
 ubuntu              trusty              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              14.04               c4ff7513909d        12 days ago         225.4 MB
 ubuntu              latest              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              utopic              75204fdb260b        12 days ago         230.1 MB
 ubuntu              14.10               75204fdb260b        12 days ago         230.1 MB
 ubuntu              precise             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04               822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04.5             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.10               c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              quantal             c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              13.04               463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              raring              463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              13.10               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              saucy               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              lucid               3db9c44f4520        4 months ago        183 MB
 ubuntu              10.04               3db9c44f4520        4 months ago        183 MB

Now that we've downloaded our images, let's create a custom Dockerfile for our customized MariaDB / Galera Docker image. I've added a brief description for each line of the file:

root@workstation:~# vi Dockerfile
 # MariaDB Galera 5.5.39 / Ubuntu 14.04 64bit
 FROM ubuntu:14.04
 MAINTAINER Pythian Nikolaos Vyzas <vyzas@pythian.com>

 RUN echo "deb http://archive.ubuntu.com/ubuntu trusty main universe" > /etc/apt/sources.list # add the universe repo
 RUN apt-get -q -y update # update apt
 RUN apt-get -q -y install software-properties-common # install software-properties-common for key management
 RUN apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db # add the key for Mariadb Ubuntu repos
 RUN add-apt-repository 'deb http://ftp.cc.uoc.gr/mirrors/mariadb/repo/5.5/ubuntu trusty main' # add the MariaDB repository for 5.5
 RUN apt-get -q -y update # update apt again
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password password root | debconf-set-selections # configure the default root password during installation
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password_again password root | debconf-set-selections # confirm the password (as in the usual installation)
 RUN LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -qqy install mariadb-galera-server galera mariadb-client # install the necessary packages
 ADD ./my.cnf /etc/mysql/my.cnf # upload the locally created my.cnf (obviously this can go into the default MariaDB path
 RUN service mysql restart # startup the service - this will fail since the nodes haven't been configured on first boot
 EXPOSE 3306 4444 4567 4568 # open the ports required to connect to MySQL and for Galera SST / IST operations

We'll also need our base configuration for MariaDB. I've included the base configuration variables for Galera – obviously there are more, but these are good enough for starting up the service:

root@workstation:~# vi my.cnf
 [mysqld]
 wsrep_provider=/usr/lib/galera/libgalera_smm.so
 wsrep_cluster_address=gcomm://
 wsrep_sst_method=rsync
 wsrep_cluster_name=galera_cluster
 binlog_format=ROW
 default_storage_engine=InnoDB
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1

So far so good: we have Docker installed, and our Dockerfile as well as our "my.cnf" file ready to go. Now it's time to build our Docker image, check that the image exists, and start up three separate Docker containers, one for each of our Galera nodes:

root@workstation:~# docker build -t ubuntu_trusty/mariadb-galera .
root@workstation:~# docker images |grep mariadb-galera
 ubuntu_trusty/mariadb-galera   latest              afff3aaa9dfb        About a minute ago   412.5 MB
docker run --name mariadb1 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb2 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb3 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash

We've started up our Docker containers; now let's verify that they are in fact up and retrieve the details we need to connect. We'll need two pieces of information, the IP address and the container ID, which can be retrieved using a combination of the "docker ps" and "docker inspect" commands:

root@workstation:~# docker ps
 CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS              PORTS                                    NAMES
 b51e74933ece        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb3
 03109c7018c0        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb2
 1db2a9a520f8        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb1
root@workstation:~# docker ps |cut -d' ' -f1 |grep -v CONTAINER | xargs docker inspect |egrep '"ID"|IPAddress'
 "ID": "b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0",
 "IPAddress": "172.17.0.3",
 "ID": "03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74",
 "IPAddress": "172.17.0.4",
 "ID": "1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f",
 "IPAddress": "172.17.0.2",

Time to use lxc-attach to connect to our containers using the full container ID, add the mounts to "/etc/mtab" to keep them MariaDB-friendly, and customize the "gcomm://" address as we would for a usual Galera configuration (the container ID is generated when the instance fires up, so make sure to use your own in the following commands):

root@workstation:~# lxc-attach --name b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0
 root@b51e74933ece:~# cat /proc/mounts > /etc/mtab
 root@b51e74933ece:~# service mysql restart
 * Starting MariaDB database mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

root@b51e74933ece:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4

root@b51e74933ece:~# exit
 exit

root@workstation:~# lxc-attach --name 03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74
 root@03109c7018c0:~# cat /proc/mounts > /etc/mtab
 root@03109c7018c0:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
 root@03109c7018c0:~# service mysql start
 * Starting MariaDB database server mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.
 root@03109c7018c0:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 30
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 2                                    |
 | wsrep_cluster_size       | 2                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@03109c7018c0:~# exit
 exit

root@workstation:~# lxc-attach --name 1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f
 root@1db2a9a520f8:~# cat /proc/mounts > /etc/mtab
 root@1db2a9a520f8:~# vi /etc/mysql/my.cnf
 root@1db2a9a520f8:~# service mysql start
 * Starting MariaDB database server mysqld                                                                                                                                                     [ OK ]
 root@1db2a9a520f8:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 34
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 3                                    |
 | wsrep_cluster_size       | 3                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@1db2a9a520f8:~# exit
 exit
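Before we wrap up, a quick replication smoke test (my sketch; the database and table names are made up, and the root/root credentials come from the Dockerfile above) – create a row on one node and read it back from another:

root@1db2a9a520f8:~# mysql -uroot -proot -e "CREATE DATABASE smoke; CREATE TABLE smoke.t1 (id INT PRIMARY KEY); INSERT INTO smoke.t1 VALUES (1);"
root@03109c7018c0:~# mysql -uroot -proot -e "SELECT * FROM smoke.t1;"

If the SELECT on the second node returns the row inserted on the first, writes are replicating across the cluster.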

Now be honest… Wasn’t that easier than creating multiple virtual machines and configuring the OS for each?

Enjoy your new MariaDB Galera Cluster and happy Dockering!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 3

Pythian Group - Thu, 2014-08-28 07:58

This blog post is the last of the series, and it will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. This was followed by another post about how to deal with the certificate. Today we will learn how to configure the VPN client.

CONFIGURE THE VPN CLIENT
1. In the Management Portal, navigate to the virtual network page; in the "quick glance" section you have the links to download the VPN package.

Choose the one appropriate to your architecture (x86 or x64).


2. After the download completes, copy the file to your servers and execute the setup.

3. Click Yes when it asks if you want to install the VPN client, and let it run.

4. After successful installation, it will be visible in your network connections.

5. In Windows 2012 you can click the network icon in the notification area (near the clock), and a right-side bar will show all the network connections; you can connect from there.
The other option is to right-click the connection in the "Network Connections" window (previous step) and click "Connect / Disconnect".

6. A window will be shown; click Connect.


7. Now check the box next to "Do not show this message again for this Connection" and click "Continue".

If everything is ok, the connection will succeed.


8. To confirm that you are connected, execute the command "ipconfig /all" in the command line; you should see an entry for the VPN with an IP assigned.


9. After a while, you will also be able to see the connection in your vNet dashboard, with the data in/out shown for the vNet.


After this last part, you are done with the point-to-site VPN configuration. You can test connectivity with the "ping" command, and use the "telnet" client to test whether a specific port is open and reachable, as sketched below.
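For example (the address and port here are placeholders – substitute the private IP of one of your Azure VMs and a port you expect to be listening):

ping 10.0.0.4
telnet 10.0.0.4 3389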

The point-to-site VPN is recommended if you want to connect users/devices to your Azure infrastructure, for a few different reasons. If you need to connect your entire on-premises infrastructure, or part of it, the way to go is to configure a Site-to-Site VPN. Stay tuned for a blog post on how that works.

Thank you for reading!

Categories: DBA Blogs

Monitoring the Filesystem for READONLY mounts using Metric Extension in OEM12c

Arun Bavera - Thu, 2014-08-28 07:29

Our client repeatedly hit an issue where a mounted filesystem went into READONLY status.

We created this User Defined Metric, now called a Metric Extension, to monitor for the condition and send an alert.


#!/bin/sh
# Lists every mounted filesystem from /etc/mtab as "SlNo|MountPoint|MountStatus".
# With nl prepending a line number, $3 is the mount point and $5 the mount
# options, so substr($5,1,2) returns "rw" or "ro" - the value the alert compares against.
#echo "SlNo MountPoint MountStatus"
nl /etc/mtab |/bin/awk '{print $1"|"$3"|"substr($5,1,2)}'
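When testing by hand, a small variation of the same one-liner (same /etc/mtab field layout; this filtered version is mine, not part of the metric) prints only the filesystems currently mounted read-only:

nl /etc/mtab | /bin/awk '{ st=substr($5,1,2); if (st=="ro") print $1"|"$3"|"st }'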

 

Credentials

Host Credentials: Uses Monitoring Credentials of Target.

 

To test this, you have to create a Named Credential set like the one below, and then set the username and password for this set from Security -> Monitoring Credentials:

emcli create_credential_set -set_name=SOA_ORABPEL_STAGE -target_type=oracle_database -auth_target_type=oracle_database -supported_cred_types=DBCreds -monitoring -description='SOA ORABPEL DB Credentials'
Categories: Development

Missing Named Credentials in OEM 12c

Arun Bavera - Thu, 2014-08-28 06:56

We are seeing that the list sometimes doesn’t show all the named credentials.

We have yet to see whether the following resolves the issue; note that it requires an OMS restart:

emctl set property -name oracle.sysman.emdrep.creds.region.maxcreds -value 500

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.

SYSMAN password:

Property oracle.sysman.emdrep.creds.region.maxcreds has been set to value 500 for all Management Servers

OMS restart is required to reflect the new property value
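The restart itself is the usual stop/start from the OMS home:

emctl stop oms
emctl start oms

(Use "emctl stop oms -all" if you also need to bounce the WebLogic Admin Server.)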

Ref:

EM 12c: Missing Named Credentials in the Enterprise Manager 12c Cloud Control Jobs Drop Down List (Doc ID 1493690.1)

Categories: Development