Feed aggregator

subnets

Pat Shuff - Tue, 2016-08-30 09:24
Let's take a step back and look at networking from a different perspective. A good reference book to start with from a corporate IT perspective is CCENT Cisco Certified Entry Networking Technician ICND1 Study Guide (Exam 100-101) with Boson NetSim Limited Edition, 2nd Edition. You don't need to read this book to get Cisco Certified but it does define terms and concepts well.

At the lowest level you start with a LAN or local area network. From the study guide, "Usually, LANs are confined to a single room, floor, or building, although they can cover as much as an entire campus. LANs are generally created to fulfill basic networking needs, such as file and printer sharing, file transfers, e-mail, gaming, and connectivity to the Internet or outside world." A LAN is typically connected with a single network hub or a series of hubs, and a router or gateway connects us to a larger network or the internet. The key services that you need on a LAN are a naming service and a gateway service. The naming service allows you to find services by name rather than by ip address. The gateway service allows you to reach services that are not on your local network. It is basically as simple as that. A gateway typically also acts as a firewall and/or a network address translation (NAT) device. The firewall either allows or blocks connections to a specific port on a specific ip address. It might have a rule that says drop all traffic, or allow traffic from anywhere, from a network range, or from a specific network address. Network address translation allows you to communicate with the outside world from a desktop on a private, non-routable ip address and have the service that you are connecting to know how to get back to you. For example, my home network has an internet router that connects to AT&T. When the router connects to AT&T, it gets a public ip address from the internet provider. This address is typically something like 90.122.5.12. This is a routable address that can be reached from anywhere on the internet. The router assigns ip addresses to my devices from the range 192.168.1.0 to 192.168.1.100, which implies that I can have 101 devices in my house. When I connect to gmail.com to read my email I do a name lookup for gmail.com and get back its ip address. My desktop, assigned 192.168.1.100, does an http get from gmail.com on port 80. This http request is funneled through my internet router, which rewrites the ip header so that the source address becomes 90.122.5.12. The router keeps track of the assignment so that a response coming back from gmail.com gets routed back to my desktop rather than my kids' desktop on the same network. To gmail.com it looks as if the connection is coming from the AT&T address, not from your desktop.
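To see that split between private and public addresses for yourself, here is a minimal sketch from a Linux desktop (the eth0 interface name and the ifconfig.me lookup service are assumptions, not part of the example above):

$ ip -4 addr show dev eth0      # the private, non-routable address the router handed out (192.168.1.100 in the example)
$ curl -s ifconfig.me           # the public address the NAT device presents to the internet (90.122.5.12 in the example)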

It is important to take our discussion back to layer 2 and layer 3 when talking about routing. If we are operating on a LAN, we can use layer 2 multicast to broadcast packets to all computers on our local network. Most broadband routers support all of layer 3 and part of layer 2. You can't really take a video camera in your home and multicast it to your neighbors so that they can see your video feed, but you can do this within your home. You can ping their broadband router if you know the ip address. Typically the ip address of a router is not mapped to a domain name, so you can't really ask for the ip address of the router two houses down. If you do know their ip address you can set up links between the two houses and share video between them over tcp/ip or udp/ip.

If we want to limit the number of computers that we can put on our home or office network, we use subnet netmasks to limit the ip address range and program the router to look for all ip addresses in the netmask range. The study guide book does a good job of describing subnetting. The example below shows how a netmask limits how many computers a network segment can host.

Note that we have defined a network with a network id of 192.168.1.64 by using netmask 255.255.255.192, which limits the subnet to the address range 192.168.1.64 through 192.168.1.127, or 62 usable host addresses. If we put a computer with an ip address of 192.168.1.200 on this network we won't be able to connect to the internet and we won't be able to use layer 2 protocols to communicate with the other computers on this network. With this configuration we have effectively created a subnet inside our network. If we combine this with the broadcast address that is used when we create our network connection we can divide our network into ranges. The study guide book goes through an exercise of setting up a network for different floors in an office and limiting each floor to a fixed number of computers and devices.
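If you want to check the arithmetic yourself, here is a minimal bash sketch (the host address 192.168.1.70 is just an illustration) that derives the network id, broadcast address and usable host count for the 255.255.255.192 (/26) netmask discussed above:

ip=192.168.1.70; prefix=26                     # /26 is the same as netmask 255.255.255.192
IFS=. read -r a b c d <<< "$ip"
ip_dec=$(( (a<<24) + (b<<16) + (c<<8) + d ))
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( ip_dec & mask ))                       # network id: 192.168.1.64
bcast=$(( net | (~mask & 0xFFFFFFFF) ))        # broadcast:  192.168.1.127
hosts=$(( (1 << (32 - prefix)) - 2 ))          # 62 usable host addresses
printf 'network %d.%d.%d.%d  broadcast %d.%d.%d.%d  usable hosts %d\n' \
  $((net>>24&255)) $((net>>16&255)) $((net>>8&255)) $((net&255)) \
  $((bcast>>24&255)) $((bcast>>16&255)) $((bcast>>8&255)) $((bcast&255)) $hosts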

One of the design challenges faced by people who write applications is where to layer security and connectivity. Do you configure an operating system firewall to restrict the address ranges that it will accept requests from? Do you push this out to the network and assume that the router will limit traffic on the network? Do you push this out to the corporate or network firewall and assume that everything is stopped at the castle wall? The real answer is yes: you should set up security at all of these layers. When you make an assumption, things fall apart the moment someone opens an email and lets the trojan horse through the castle gates.

If you look at the three major cloud vendors they all take the same basic approach. Microsoft and Oracle don't let you configure the subnet that you are assigned to; you get assigned to a subnet and have little choice over the ip address range for the computers that are placed into the cloud solution. Amazon allows you to define a subnet and ip address range. This is good and bad. It makes routing a little more difficult in the cloud, and address translation needs to be programmed for the subnet that you pick. Vendors that assign the ip address range can hardwire routing for that network, which optimizes routing and simplifies the routing tables. Amazon faces problems with EC2 and S3 connectivity and ends up charging for data transmitted from S3 to EC2. Bandwidth is limited with these connections, partly due to routing configuration limitations. Oracle and Microsoft have simpler routing maps and can put switched networks between compute and storage, which provides a faster and higher throughput storage network connection.

The fun part comes when we want to connect our network, which is on a non-routable address range, to our neighbors. We might want to share our camera systems and record them into a central video archive. Corporations face this when they want to create a cloud presence yet keep servers in their data center. Last week we talked about hiding a server in the cloud and putting our database where you can't access it from the public internet. This is great for security, but what happens when we need to connect with sql developer to the database to upload a new stored procedure? We need to be able to connect to this private subnet and map it to our corporate network. We would like to be able to get to 10.10.1.122 from our network, which is mapped to 192.168.1.0. How do we do this? There are two approaches. First, we can define a secondary network in our data center to match the 10.10.1.0 network and create a secure tunnel between the two networks. The second is to remap the cloud network to the 192.168.1.0 subnet and create a secure tunnel between the two networks. Do you see a common theme here? You need a secure tunnel with both solutions, and you need to change the subnet either at the cloud host or in your data center. Some shops have the flexibility to change subnets in their corporate network or data center to match the cloud subnet (as is required with Oracle and Microsoft) while others require the cloud vendor to change the subnet configuration to match their corporate policy (which Amazon provides).

Today we are not going to dive deep into virtual private networks, IPSec, or secure tunnels. We are going to touch on the subjects and discuss them in depth later. The basic concept is that a database developer working on their desktop needs to connect to a database server in the cloud, and a Java developer working on their desktop needs to connect to a Java server in the cloud. We also need to hide the database server so that no one from the public internet can connect to it. We want to limit connections to the Java server to port 443 for secure https from public ip addresses and allow ssh login on port 22 only from our corporate network. If we set a subnet mask, define a virtual private secure network between our corporate network and the cloud network, and allow local desktops to join this secure network, we can solve the problem. Defining the private subnet in the cloud and connecting it to our corporate network is not enough; this goes back to the castle wall analogy. We want to define firewall rules at the OS layer. We want to define routing protocols between the two networks and allow or block communication at different layers and ports. We want to create a secure connection from our sql developer, java developer, or eclipse development tools to our production servers. We also want to facilitate tools like Enterprise Manager to measure and control configurations as well as notify us of overload or failure conditions.
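A minimal sketch of what those OS-layer rules could look like with iptables on the Java server (the 192.168.1.0/24 corporate range comes from the example above; rule persistence and any cloud-level security lists are left out as assumptions):

# keep loopback and already-established sessions working
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# https open to the public internet
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# ssh only from the corporate subnet used in the example above
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
# everything else inbound is dropped
iptables -P INPUT DROP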

In summary, there are a variety of decisions that need to be made when deploying a cloud solution. Letting the application developer deploy the configuration is typically a bad idea because they don't think of all of the corporate requirements. Letting the IT security specialist deploy the configuration on their own is also a bad idea: the solution will be so restrictive that it makes the cloud services unusable. The architecture needs to be a mix of accessibility, security, and usability. Network configuration is not always the easiest discussion to have, but it is critical to have it early in the conversation. This blog is not trying to say that one cloud vendor is better than another but simply trying to point out the differences so that you as a consumer can decide what works best for your problem.

Inside the OOW Session: Content and Experience Management: Roadmap and Vision

WebCenter Team - Tue, 2016-08-30 09:06


Gearing up for Oracle OpenWorld 2016? As you prepare for the conference, we wanted to give you a rundown of all that awaits you at Oracle OpenWorld (OOW). Our recent post provided an overview of all things Content and Experience Management at OOW16, so if you haven’t already done so, I do recommend bookmarking it. And, starting today, we will do a series of posts on what to expect from the must-attend sessions in content and experience management. Today we look at the main session:

Content and Experience Management: Roadmap and Vision [CON7256] 
David le Strat, Senior Director, Oracle
Monday, Sep 19, 11:00 a.m. | Moscone West—2014

In this session, you will hear Oracle executive David le Strat discuss the key focus areas of investment and updates on the Oracle Cloud portfolio for Content and Experience Management, and how existing Oracle WebCenter customers can leverage the benefits of the cloud. You will also get to see the solutions in action and explore use cases that are driving digital experiences in our customer organizations today.

Content and Experience Management is about driving the complete digital experience from mobile and social collaboration for content creation, reviews and decision making on content, to publication of the content across the multiple channels – web, social and mobile, and measuring results on the same. So why should you care? Here are three reasons why:

First, there is no doubt that we live in a mobile world today. In a world where we are expected to always be connected and need to make decisions rapidly, the best mobile collaboration experience means not just having access to all your content from your mobile devices but also enabling real time communication with teams and co-workers to build and review content. Mobile collaboration also implies being able to work the way you want to work – using mobile productivity suites like MS Office and aggregating information from back-end applications like CRM, HCM, ERP and others. Has your organization embraced mobile collaboration yet?

Second, content engagement for your ecosystem happens across multiple different channels – web, mobile, social. You need the right content strategy. Marketers, on average, spend a quarter of their budget on content and 70% of the content goes unused. To make content management and consumption more effective, a channel agnostic approach is required so that content creation, review and delivery can consistently happen across the different channels. Moreover, delivering insights on content usage and interaction is critical as it drives up the value of the content and improves the iterative content management process. What is your multi-channel content management strategy?

Third, while there is a lot of focus on native mobile apps, mobile web delivers the largest share of mobile traffic. Mobile web is at the cusp of significant disruption. Customers, of course, expect engaging, interactive and content-rich sites, but what if you could take advantage of native device capabilities right from your web site?

Join us here at OOW to learn more about Oracle’s:

Content and Experience Management: Roadmap and Vision [CON7256] 
David le Strat, Senior Director, Oracle
Monday, Sep 19, 11:00 a.m. | Moscone West—2014

And don’t forget to follow all things #OracleDX and #OOW16 on Twitter.

PeopleSoft Partner of the Year

Duncan Davies - Tue, 2016-08-30 09:00

I would like to borrow a few minutes of your time to ask a quick favour. If you haven’t already, could you please vote ‘Gold’ for Cedar in the UK PeopleSoft Partner of the Year competition?

http://pya.ukoug.org/index.php/survey/index

Here’s why I believe we’re worth your vote:

  • We’ve been serving PeopleSoft customers for 20+ years, and in each of the last few years we’ve worked with more than 40 clients
  • We’re active members of UKOUG, with multiple employees dedicating time to its committees – quite an investment for a company our size
  • We’re active in the PeopleSoft community, with multiple blogs and free newsletters
  • We share knowledge freely, with multiple speakers at UKOUG’s Apps15 and Oracle OpenWorld last year
  • We host our own knowledge sharing events, enabling the PeopleSoft community to network and exchange ideas, whilst Cedar experts and customers share success stories
  • We’re on the cutting edge: if you want to know about Oracle Public Cloud, Fluid UI or Selective Adoption, we’re the people to speak to

I hope that’s enough to convince you. If you’ve voted for us please let me know. If you need more persuasion, please do get in touch.


Simple Cold Backup using tar

Michael Dinh - Tue, 2016-08-30 08:50

At times, I find it much simpler to shut down my test database and create a cold backup, which allows me to recover from any errors.
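The rough flow is a clean shutdown, a tar of the datafile directory, and a restart once the archive exists. A minimal sketch (OS authentication and the /oradata/DB01 path from the listings below are assumptions):

sqlplus -s / as sysdba <<'EOF'
shutdown immediate
exit
EOF
cd /oradata/DB01          # directory holding the datafiles, as in the restore listing below
tar -cvzf /media/sf_OracleSoftware/${ORACLE_SID}_${USER}_$(hostname -s)_backup.tar.gz .
sqlplus -s / as sysdba <<'EOF'
startup
exit
EOF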

$ echo $ORACLE_SID
DB01

$ tar -cvzf /media/sf_OracleSoftware/${ORACLE_SID}_${USER}_`hostname -s`_backup.tar.gz .
./
./obj_source.out
./onlinelog/
./onlinelog/o1_mf_1_cw3syboq_.log
./onlinelog/o1_mf_2_cw3sybrt_.log
./onlinelog/o1_mf_3_cw3syc03_.log
./controlfile/
./controlfile/o1_mf_cw3sybl0_.ctl
./datafile/
./datafile/o1_mf_temp_cw3syrhz_.tmp
./datafile/o1_mf_users_cw3syrk7_.dbf
./datafile/o1_mf_sysaux_cw3syl2w_.dbf
./datafile/o1_mf_demo_cw4tz56f_.dbf
./datafile/o1_mf_undotbs1_cw3syoc6_.dbf
./datafile/o1_mf_system_cw3syd5k_.dbf

$ ll /media/sf_OracleSoftware/DB01_oracle_arrow_backup.tar.gz
-rwxrwx---. 1 root vboxsf 382676286 Aug 28 13:09 /media/sf_OracleSoftware/DB01_oracle_arrow_backup.tar.gz

++++++++++

[06:45]oracle@arrow:DB01:/oradata
$ mkdir DB01/
[06:45]oracle@arrow:DB01:/oradata
$ cd DB01/
[06:45]oracle@arrow:DB01:/oradata/DB01
$ ll
total 0
[06:45]oracle@arrow:DB01:/oradata/DB01
$ tar -xvf /media/sf_OracleSoftware/${ORACLE_SID}_${USER}_`hostname -s`_backup.tar.gz .
./
./obj_source.out
./onlinelog/
./onlinelog/o1_mf_1_cw3syboq_.log
./onlinelog/o1_mf_2_cw3sybrt_.log
./onlinelog/o1_mf_3_cw3syc03_.log
./controlfile/
./controlfile/o1_mf_cw3sybl0_.ctl
./datafile/
./datafile/o1_mf_temp_cw3syrhz_.tmp
./datafile/o1_mf_users_cw3syrk7_.dbf
./datafile/o1_mf_sysaux_cw3syl2w_.dbf
./datafile/o1_mf_demo_cw4tz56f_.dbf
./datafile/o1_mf_undotbs1_cw3syoc6_.dbf
./datafile/o1_mf_system_cw3syd5k_.dbf
[06:46]oracle@arrow:DB01:/oradata/DB01
$ ll
total 28
drwxr-x---. 2 oracle oinstall  4096 Aug 27 12:19 controlfile
drwxr-x---. 2 oracle oinstall  4096 Aug 27 12:19 datafile
-rw-r--r--. 1 oracle oinstall 14506 Aug 27 22:23 obj_source.out
drwxr-x---. 2 oracle oinstall  4096 Aug 27 12:19 onlinelog
[06:46]oracle@arrow:DB01:/oradata/DB01
$

The fastest way to get the Oracle sample schemas

Yann Neuhaus - Tue, 2016-08-30 00:52

Do you need the Oracle sample schemas to do a quick test or demonstration? And, as always, you did not install the sample schemas when you set up your environment? Probably the fastest way to get them installed is to download them from github. Installation instructions are there as well. Have fun …
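If you prefer the command line, a minimal sketch (assuming git is installed and that the repository still lives under the oracle organisation on GitHub):

git clone https://github.com/oracle/db-sample-schemas.git
cd db-sample-schemas
ls          # the installation instructions ship with the repository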

 


Oracle Data Integrator: Upgrade to ODI 12c - A Case Study

Rittman Mead Consulting - Mon, 2016-08-29 23:27

We don’t often get a chance to share our customer success stories due to various reasons, such as required anonymity of the client or certain other legalities. But when we do, it sure feels great! Recently, Eurocontrol, a longtime Rittman Mead client in Brussels, Belgium, performed an upgrade of Oracle Data Integrator from 11g to 12c. With the help of Rittman Mead, and specifically data integration expert Jérôme Françoisse, Eurocontrol was able to perform the upgrade successfully without any downtime. The upgrade process, and outcome, caught the attention of the Oracle Data Integration product management team, prompting the case study titled “Eurocontrol Improves ETL Development Time-to-Market After Upgrading to Oracle Data Integrator 12c” to be written and shared with the world.


Eurocontrol is the European organization for the safety of air navigation, handling management of the air traffic network, route charging, and working with other organizations to build a Single European Sky. Due to the critical air traffic demand data flowing through their ETL system, the upgrade had to be completed quickly, efficiently, and with minimal downtime. With Rittman Mead’s Oracle Data Integrator expertise, there was no issue exceeding these expectations. Beyond performing the upgrade, Jérôme and team also modified Groovy scripts, custom Knowledge Modules and other objects to work with the updated ODI SDK, and helped perform a minor cleanup of the mapping objects.

Eurocontrol can now take full advantage of the latest and greatest features that Oracle Data Integrator 12c has to offer. From flow-based mappings and component KMs to deployment specifications and reusable mappings, ODI 12c is being used in full-force. Even more impressive, Eurocontrol has not had any unplanned ODI agent downtime since the upgrade!

If your company is planning an upgrade of Oracle Data Integrator 11g to the latest version ODI 12c, as Eurocontrol did, go ahead and give Rittman Mead a shout at info@rittmanmead.com and let’s see how we can help you along the way.

Categories: BI & Warehousing

Fragmentation of table

Tom Kyte - Mon, 2016-08-29 18:46
Hi, I have a production instance which should run 24*7 I want to know How to determine and perform fragmentation of tables Please help Thank you
Categories: DBA Blogs

Split date interval into year slices

Tom Kyte - Mon, 2016-08-29 18:46
Hi, I need to split a date range into year slices like split 05-May-2015 to 05-May-2016 as start_date end_date 05_May2015 31-Dec-2015 01-Jan-2016 05-May-2016 Any help would be of great help. Thankyou
Categories: DBA Blogs

Need to Know Source of SQL Text

Tom Kyte - Mon, 2016-08-29 18:46
I have a entry in v$sql - SELECT ORG_ENTITY_ID FROM MS_QMS_FLOW_DOWN_ORG_V WHERE ACTIVITY_NAME = :B2 AND UPPER(USER_NAME) = UPPER(:B1 ). Is there a way to find out which Package/Procedure caused this Select statement to run ? PS: I am new to the f...
Categories: DBA Blogs

Now is the Time. This is the Place. ERP Cloud at OpenWorld 2016.

Linda Fishman Hoyle - Mon, 2016-08-29 17:46

A Guest Post by Oracle's Kazim Isfahani, Director, ERP Strategy (pictured left)

The excitement level is building for OpenWorld 2016, September 18 - 22 in San Francisco, CA. CFOs, project leaders, procurement officers, along with finance and operations leaders, will gather at the premier executive destination to learn more about shifting their business to the cloud to build the business of tomorrow.

Last year, OpenWorld session attendance with an ERP content focus grew 80 percent. One year later, we have twice as many customers and we're offering more than 50 general, conference, and ‘Meet the Expert’ sessions expressly dedicated to ERP Cloud. These sessions are designed to help ERP Cloud customers and prospects hear from their peers, learn from Oracle’s experts, and maximize their time at OpenWorld.

ERP Showcase

New this year is the ERP Showcase which brings together Oracle’s ERP team and partners to help organizations create their own path to the cloud. All track sessions, product demos, theater presentations, networking, and more will be co-located on the third level in Moscone West. This is a unique opportunity to secure all of your ERP Cloud needs from one central location. ERP Showcase sponsors include Cognizant Technology Solutions, Emtec, Forsys, Huron, Infovity, and Prometheus Group.

Customers Sharing Their Journeys

Again this year, we’ll feature a plethora of customers, each willing to share their journey to the cloud. Attendees will have the opportunity to hear from senior executives from companies including Orange, Qualcomm, GE Digital, Profound Medical, City of Detroit, Alex Lee, Skanska, Hillside Family Agencies, Ballard, and the Cerebral Palsy Association of NY. These speakers bring a treasure trove of critical information and proven success recipes that will guide those of you seeking an ERP Cloud journey of your own. The learnings gleaned from these cloud veterans will prove invaluable!

ERP Cloud Execs and Sessions

Key Oracle ERP executives will be on-hand, including Rondy Ng, SVP Applications Development; Tom Anthony, VP Procurement Product Strategy; Colleen Baumbach, VP Applications Development, and Terrance Wampler, VP Financials Product Strategy. Each will be showcasing the latest product releases and associated roadmaps across the ERP family.

Here are a few of the 50 ERP Cloud not-to-be missed sessions from the Session Catalog:

  • “How Oracle’s Modern Cloud Applications Can Drive Your Success,” September 19, 11:00 am PT | Room 3008 [GEN7758]
  • “Run Your Modern Business on a Modern Cloud,” September 19, 2:30 pm PT | Room 3008 [GEN7298]
  • “Oracle Procurement—Empowering Modern Procurement,” September 19, 4:30 pm PT | Room 3001 [GEN7254]
  • “Oracle Project Portfolio Management Cloud—Seize the Digital Opportunity,” September 19, 4:30 pm PT | Room 3003 [GEN7279]
  • “Finance Takes Control and Transforms with the Cloud,” September 20, 10:45 am PT | Room 3001 [GEN7299]
  • “Managing Your Revenue by IFRS 15/ASC 606: Oracle Financials Cloud and Oracle E-Business Suite,” September 21, 11:00 am PT | Room 3016 [CON7314]
  • “Oracle Financials Cloud Implementation: A Customer Experience,” September 21, 3:00 pm PT | Room 3001 [CON6064]

 

Meet the Experts

This ever-popular activity is an opportunity for you to meet with the Oracle ERP Cloud experts, the people who design and build the applications. There are three sessions scheduled.

  • “Meet the Experts: Oracle Procurement Cloud” September 20, 11:00 am PT | Room 3001A [MTE7785]
  • “Meet the Experts: Oracle Financials Cloud” September 21, 3:00 pm PT | Room 3001A [MTE7784]
  • “Meet the Experts: Oracle Project Portfolio Management Cloud,” Sep 21, 12:15 pm PT| Room 3001A [MTE7792]

 

These interactive sessions allow for discussions on Oracle Applications strategy and customer specific business and IT strategy. The experts are available to discuss the latest releases and share insights into the best path for your enterprise. For those of you considering ERP Cloud, this is a perfect opportunity to meet with the product development and strategy team, and learn more about the product. Space is limited, so we recommend that you pre-register.

Demo Grounds and More

The Demo Grounds are a great way for you to network and discover everything our partners have to offer. Want all that ERP Cloud has planned in one place? Visit ERP Central @ OpenWorld 2016.

What About After Hours?

After a long day of learning, networking, and sharing, OpenWorld has plenty of fun planned as well.

  • Welcome Reception on Sunday, September 18 at 7:00 pm on the Oracle Cloud Plaza @ Howard Street
  • ERP Central Happy Hour on Tuesday, September 20 at 6:00 pm, on the third floor of Moscone West
  • Billy Joel concert on Wednesday, September 21 at AT&T Park, an exclusive, private event
  • Numerous partner events; visit the partners’ booths on the third level in Moscone West for details.

 

Register Now

Oracle ERP Cloud is top of mind for companies across size, industry, and geography. The ERP Cloud agenda for this premier event offers you both depth and breadth and an opportunity to hear from experienced Oracle ERP Cloud customers and to apply the best practices in your own organizations.

Filenames in AWR reports

Yann Neuhaus - Mon, 2016-08-29 14:33

If you have read my latest blog posts, you know I’ve measured IOPS with SLOB to estimate ACFS overhead on a fast storage. This blog is about something I learned after wasting one hour on the result.

Here is how I did my tests:

  1. Create a SLOB database in ACFS
  2. Run SLOB PIO tests and tag the AWR report as ‘ACFS’
  3. Move datafile to +DATA
  4. Run SLOB PIO tests and tag the AWR report as ‘ASM’

Of course, I’ve scripted it to run several tests, varying the number of sessions, work unit size, etc., while I was doing something more productive.

Once done, I had a set of AWR reports and the first task was to check that they were consistent. But they were not: the datafile in the ‘File IO Stats’ section did not match the tag I had put in the file name. First I suspected a bug in my script, with bad tagging or a failed datafile move. I had to read the alert.log to see that my tagging was good but that the filename in the AWR reports was wrong. I finally looked at the AWR views to understand why the filename was wrong and understood the problem:

SQL> desc DBA_HIST_DATAFILE;
Name Null? Type
----------------------------------------- -------- ----------------------------
DBID NOT NULL NUMBER
FILE# NOT NULL NUMBER
CREATION_CHANGE# NOT NULL NUMBER
FILENAME NOT NULL VARCHAR2(513)
TS# NOT NULL NUMBER
TSNAME VARCHAR2(30)
BLOCK_SIZE NUMBER
CON_DBID NUMBER
CON_ID NUMBER

There’s no SNAP_ID. AWR does not store the history of file names. We could suppose that it stores only the latest filename, but then my reports would have been good as they were generated immediately after the snapshot. Or that the first name stays, but I had some reports with ‘+DATA’.

Then, I grepped for ‘WRH$_HISTORY’ in ORACLE_HOME/rdbms/admin and came upon this:

dbmsawr.sql: -- This routine updates WRH$_DATAFILE rows for the datafile name and
dbmsawr.sql: -- WRH$_DATAFILE with the current information in database.

There is an update_datafile_info procedure here in the dbms_workload_repository and the comment says something like:
This change will be captured at max after some
-- (generally 50) snapshots. So the AWR and AWR report may be wrong with
-- respect to data file name or tablespace name for that duration.

I love to work with Oracle. All the information is there if you know where to look.

So if you want to rely on the filenames in an AWR report after a move, you should run this procedure before taking the report. And you should take that report before the next datafile move.

Here is the example:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
6 rows selected.

My file is users01 and this is what is stored in AWR.

I rename it to users02 (thanks to 12c online datafile move):

SQL> alter database move datafile '/u01/DEMO/oradata/DEMO14/users01.dbf' to '/u01/DEMO/oradata/DEMO14/users02.dbf';
Database altered.

but AWR is not aware of the change even after a snapshot:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users01.dbf

You have to wait for those 50 snapshots or run the update:

SQL> exec dbms_workload_repository.update_datafile_info;
PL/SQL procedure successfully completed.

SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users02.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users02.dbf

But as you see no history about previous names.

Note that if you look at the table behind the view, there’s a SNAP_ID but it’s not part of the primary key. It is used by the purge procedures.

 


Letting GoldenGate automatically maintain the insert and update timestamps on the target

Yann Neuhaus - Mon, 2016-08-29 11:18

Today at one of our GoldenGate customers we were faced with the following requirement: for one of the tables on the target we needed two additional columns to hold the timestamps for when the row was inserted or updated. Just to be clear: it was not about getting the timestamps of the inserts or updates as they happened on the source, but on the target. In this post we'll look at how GoldenGate can be configured to do the work requested.

All the examples below work with the well known scott/tiger schema. There is one extract running for capturing on the source and one replicat re-playing the changes on the target (over sqlnet, no integrated mode here). This is the current status of the streams:

GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:02    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:01    


GGSCI (oelogg1) 2> view params EXTRSCO

EXTRACT extrsco
USERIDALIAS DB1 DOMAIN admin
GETUPDATEBEFORES
REPORT AT 23:40
DDL INCLUDE OPTYPE TRUNCATE OBJNAME ARBOR.*, &
    INCLUDE OPTYPE ALTER OBJNAME ARBOR.*
EXTTRAIL /u01/app/ogg/product/12.1.2.1.9/dirdat/es
TABLE SCOTT.*;

GGSCI (oelogg1) 3> view params REPLSCO

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.*, TARGET SCOTT.*;

Pretty basic, no unusual stuff here. The table we’ll use for the scope of this post is the “project” table which has the following contents in a fresh scott/tiger installation:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Of course the table looks the same on the target:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

To prove that the streams are really working let's add an additional row to the source:

SQL> insert into project values (1005, 'my fun project 1', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

SQL> 

… and then check if the row indeed was replicated to the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1005 my fun project 1				      29-AUG-16    28-SEP-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Ok, looks fine, so back to the initial requirement. We need two additional columns on the target:

alter table SCOTT.PROJECT add create_dt timestamp with time zone;
alter table SCOTT.PROJECT add update_dt timestamp with time zone;
alter table SCOTT.PROJECT modify create_dt default to_date('01.01.2000','DD.MM.YYYY');
alter table SCOTT.PROJECT modify update_dt default to_date('01.01.2000','DD.MM.YYYY');

In our case we needed to set a default value as both columns are not allowed to contain NULL values. For the moment the content of the table on the target is:

SQL> col CREATE_DT for a20
SQL> col UPDATE_DT for a20
SQL> select * from project;

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT	     UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ -------------------- --------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

In real life, when the table to be extended holds millions of rows, the following will probably be too simple and you'll need to spend some time thinking about how to organize the updates. For the scope of this post this is fine:

update SCOTT.PROJECT set create_dt = to_date('01.01.2000','DD.MM.YYYY') where create_dt is null;
update SCOTT.PROJECT set update_dt = to_date('01.01.2000','DD.MM.YYYY') where update_dt is null;
commit;
alter table SCOTT.PROJECT modify create_dt not null;
alter table SCOTT.PROJECT modify update_dt not null;

From now on we have identical insert and update timestamps for all of the rows on the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01                01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

A final check on the source to be sure that the default values work:

SQL> insert into project values (1006, 'my fun project 2', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

All fine. Time to do the GoldenGate work. Obviously the first step is to stop the replicat:

GGSCI (oelogg1) 2> stop REPLSCO

Sending STOP request to REPLICAT REPLSCO ...
Request processed.

GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:09    
REPLICAT    STOPPED     REPLSCO     00:00:00      00:00:01    

Let's do the insert case first by adding the following line (COLMAP) to the replicat configuration:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Start again:

GGSCI (oelogg1) 6> start REPLSCO

Sending START request to MANAGER ...
REPLICAT REPLSCO starting


GGSCI (oelogg1) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:01    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:00    

Looks good from a configuration perspective. Time to start:

SQL> insert into project values (1007, 'my fun project 3', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target we should now see the exact insert date of the record instead of the default value of the column:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect, this works. Let's go on with the update case: all we need is to add the additional column for the update case and populate it:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)),
    update_dt = @IF (@VALONEOF (@GETENV ('GGHEADER', 'OPTYPE'), 'UPDATE', 'SQL COMPUPDATE', 'PK UPDATE' ), @DATENOW(), @COLSTAT 
(MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Then stop and start the replicat to bring the changes into effect. Let's create a new row just to see that this does not have any effect on the insert case:

SQL> insert into project values (1008, 'my fun project 4', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

What we should see on the target are two rows with an exact insert date but a default update date:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 my fun project 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect. What about the update itself?

SQL> update project set DESCRIPTION = upper(description) where PROJECTNO = 1008;

1 row updated.

SQL> commit;

Commit complete.

This should result in one exact update date for my fun project 4:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 MY FUN PROJECT 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 29-AUG-16 01.04.49.000000 PM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

8 rows selected.

Perfect, works fine, too. Just to be sure that deletes still work, let's do a final test:

SQL> delete from project where PROJECTNO = 1008;

1 row deleted.

SQL> commit;

Commit complete.

SQL> 

The two additional columns should not prevent GoldenGate from being able to delete the rows, let's see:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

The fun project 4 is gone and all works as expected. Hope this helps ….

 


networking differences between cloud providers

Pat Shuff - Mon, 2016-08-29 09:00
In this blog entry we are going to perform the simple task of enabling an Apache web server on a Linux server and look at how to do this on the Oracle Cloud, Amazon AWS, and Microsoft Azure. Last week we did this for the Oracle Cloud but we will quickly review it again. As we go down this path we will look at the different options presented to you as you create a new instance and see how the three cloud vendors diverge in their approach to services. Which version of Linux we select is not critical; we are looking at the cloud tooling and what is required to deploy and secure an instance. Our goals are:
  • Deploy a Linux instance into a cloud service
  • Enable port 22 to allow us to communicate from our desktop into the Linux instance
  • Enable port 80 to allow us to communicate from the public internet into the Linux instance
  • Disable all other services coming into this instance.
  • We will use DHCP initially to get an ip address assigned to us but look at static ip addresses in the end

Step 1: Deploy a Linux instance into a small compute service. Go with the smallest compute shape to save money, go with the smallest memory allocation because we don't need much for a test web server, go with the default network interfaces and have an ip address assigned, and go with the smallest disk you can to speed up the process.

Step 1a - Oracle Public Cloud

We go to the Compute Console and click on Create Instance. This takes us through screens that allow us to select an operating system, core count, and memory size. When we get to the instance config we have the option of defining network security rules with a Security List. We can either create a new security list or select an existing one. We will in the end select the default, which allows us to connect on port 22, and modify the security list at a later point. We could have selected the WebServer entry from the Security List because we have done this before, but for this exercise we will select the default and come back later to add another access point. Once we get to the review screen we can create the instance. The only networking question that we were asked was which Security List definition we want.

Step 1b - Amazon AWS

We go to the EC2 Console and click on EC2 followed by Launch Instance. From the launch screen we select a Linux operating system and start the configuration. Note that the network and subnet menus allow you to deploy your instance into an ip address range of your choosing. This is different from the Oracle Cloud where you are assigned a non-routable ip address range based on the server that you are dropped into. Since these are private ip addresses for a single server this is really not a significant issue. We are going to accept the defaults here and configure the ports in a couple of screens. We are going to go with a dhcp public ip address to be able to attach to our web server.

We accept the default storage and configure the ports that we want to open for our instance. We can define a new security group or accept an existing one. For this example we are going to add http port 80, since it is a simple addition at this point, and move forward with this configuration. We could go with a predefined configuration that allows ports 80 and 22, but for this example we will create a new one. We then review and launch the instance.

Step 1c - Microsoft Azure

We go to the Azure Portal and click on Virtual Machine -> Add, which takes us to the Marketplace. From here we type in Linux and pick a random Linux operating system to boot from. We are assigned a subnet just like we were with the Oracle Cloud and have the ability to add firewall rules to allow ports 80 and 22 through from the public internet. Once we have this defined we can review and launch our instance.

Step 2: Log into your instance and add the Apache web server. On a Red Hat-style image this is done with a yum install httpd command (on Debian-style images the package is apache2 and you use apt-get). We then edit the index.html file in the web server document root (/var/www/html on Red Hat-style images) so that we can see an answer from the web server.
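On an Oracle Linux or other Red Hat-style image the sequence looks roughly like this (package, path and service names are assumptions that differ on Debian-style images):

sudo yum -y install httpd
echo "hello from the cloud" | sudo tee /var/www/html/index.html
sudo systemctl enable --now httpd    # older images without systemd use 'service httpd start' instead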

Step 3: Verify the network security configuration of the instance to make sure that ports 80 and 22 are open.

Step 3a: Oracle Cloud

When we created the instance we went with the default network configuration, which only has port 22 open. We now need to add port 80 as an open inbound port from the public internet. This is done by going to the Compute Instance console and viewing our web server instance. By looking at the instance we can see that we have the default Security List associated with it. If we already have a rule defined for port 80 we can just click on Add Security List and add the value. We are going to assume that we have not defined a rule and need to do so. We create a new rule which allows http traffic from the public internet to our security list WebServer. We then need to go back and add a new Security List to our instance and select WebServer, which allows ports 80 and 22.

Step 3b and 3c: AWS and Azure

We really don't need to do anything here because both AWS and Azure gave us the ability to add a port definition in the instance creation menus. Had we selected a predefined security list there would be no step 3 for any of the services.
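Whichever vendor you picked, a quick check from a machine outside the cloud network confirms the rules once they are in place (203.0.113.10 is a placeholder for the public ip address assigned to your instance, and the login name depends on the image):

curl -I http://203.0.113.10/     # port 80: expect an HTTP response header from Apache
ssh opc@203.0.113.10             # port 22: should still answer from a network the rules allow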

Surprisingly, we are done. Simple network configuration is simple for all three vendors. The key differences that we see are that Amazon and Microsoft give you the ability to define individual port definitions as you create your instance; Oracle wants you to define this with Security Rules and Security Lists rather than one at a time for each instance. All three platforms allow you to configure firewall rules ahead of time and add those as configurations. In this example we were assuming a first time experience, which is not the normal way of doing things. The one differentiator that did stand out is that Amazon allows you to pick and choose your subnet assignment; Oracle and Microsoft really don't give you choices and assign you an ip range. All three give you the option of static or dynamic public ip addresses. For our experiment there really isn't much difference in how any of the cloud vendors provision and administer firewall configurations.

ODA X6 database classes and shapes

Yann Neuhaus - Mon, 2016-08-29 08:47

On the Oracle Database Appliance, like on the Oracle public Cloud, you define the CPU capacity with ‘shapes’. On the latest ODA, the X6, we have a new interface to provision a database. Let’s look at the different shapes available.

ODACLI

You can provision a new database with the command line ODACLI which replaces the OAKCLI you used in ODA X5:

[root@odax6m ~]# odacli create-database
Usage: create-database [options] Options:
* --adminpassword, -m
Password for SYS,SYSTEM and PDB Admin
--cdb, -c
Create Container Database
Default: false
--dbclass, -cl
Database Class OLTP/DSS/IMDB
Default: OLTP
--dbconsole, -co
Enable Database Console
Default: false
--dbhomeid, -dh
Database Home ID (Use Existing DB Home)
* --dbname, -n
Database Name
--dbshape, -s
Database Shape{odb1,odb2,odb3 etc.}
Default: odb1
--dbstorage, -r
Database Storage {ACFS|ASM}
Default: ACFS
--dbtype, -y
Database Type {SI|RAC}
Default: SI
--help, -h
get help
Default: false
--instanceonly, -io
Create Instance Only (For Standby)
Default: false
--json, -j
json output
Default: false
--pdbadmin, -d
Pluggable Database Admin User
Default: pdbadmin
--pdbname, -p
Pluggable Database Name
Default: pdb1
--targetnode, -g
Node Number (for single-instance databases)
Default: 0
--version, -v
Database Version
Default: 12.1.0.2

ODA WebConsole

But the ODA X6 also has a small graphical interface in the web console.

[Screenshot: database creation form in the ODA web console]

12c multitenant is the default, but you can choose.

Edition

You don't have the choice when you create the database: you install the ODA in Standard Edition or Enterprise Edition and then you cannot change it.

Versions

Two database versions are available: 11.2.0.4 and 12.1.0.2

[Screenshot: database version selection in the ODA web console]

You choose ODA to get a stable, certified and supported system, so it makes sense to run only supported versions with the latest PSU. If you have older versions, you must upgrade. Set optimizer_features_enable to the previous version if your application was not tuned for newer versions. Very often, when an ISV does not certify its software it's because of optimizer regressions. With proper testing and optimizer settings you should be able to upgrade any application without the risk of regression.

Templates

There are four DBCA templates available, one for each combination of:

  • Standard Edition or Enterprise Edition
  • Multitenant or non-CDB

The main differences between the Enterprise Edition and Standard Edition templates are:
  • the options OMS, SPATIAL, CWMLITE and DV are installed in Enterprise Edition but not in Standard Edition
  • fast_start_mttr_target=300 in Enterprise Edition (the feature is not supported in Standard Edition)

The main differences between the multitenant and non-CDB templates are:
  • the options JSERVER, IMEDIA, ORACLE_TEXT and APEX are installed in a CDB and not in a non-CDB
  • maxdatafiles=1024 in a CDB (100 in a non-CDB)

All templates are configured with filesystem_io_options=setall and use_large_pages=only

The following underscore parameters are set for all ODA templates:
*._datafile_write_errors_crash_instance=FALSE
*._disable_interface_checking=TRUE
*._enable_NUMA_support=FALSE
*._file_size_increase_increment=2143289344
*._gc_policy_time=0
*._gc_undo_affinity=FALSE
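
If you want to verify what the template actually set, the documented parameters are visible in v$parameter, and the underscore ones can be read from the fixed tables when connected as SYS (a generic sketch, nothing ODA-specific):

-- documented parameters set by the templates
SELECT name, value FROM v$parameter
 WHERE name IN ('filesystem_io_options','use_large_pages','fast_start_mttr_target');

-- hidden parameters (connect as SYS; x$ fixed tables are not visible to normal users)
SELECT i.ksppinm AS name, v.ksppstvl AS value
  FROM x$ksppi i, x$ksppcv v
 WHERE v.indx = i.indx
   AND i.ksppinm IN ('_datafile_write_errors_crash_instance','_disable_interface_checking',
                     '_enable_NUMA_support','_file_size_increase_increment',
                     '_gc_policy_time','_gc_undo_affinity');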

Note that both 12c and 11g are available in Enterprise Edition as well as Standard Edition (which can even be Standard Edition One for 11g).
Of course, CDB is only available in 12c.

Shapes

As in the Oracle Public Cloud, the CPU and memory come in shapes:

[Screenshot: CaptureODACShape]

The choice is the number of cores. The cores are hyperthreaded, which means that odb1 has cpu_count=2, and this is set in the spfile. Note that at install time no resource manager plan is active, so instance caging will not occur except during the automatic maintenance window. My recommendation is to set a plan. In 12.1.0.2 Standard Edition, resource manager is implicitly activated.
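
Since instance caging only takes effect when both cpu_count and a resource manager plan are set, following that recommendation can be as simple as activating a plan (a sketch; choose the plan that fits your consolidation policy):

-- cpu_count is already set by the shape; setting a plan makes caging effective outside the maintenance window
ALTER SYSTEM SET resource_manager_plan='DEFAULT_PLAN' SCOPE=BOTH;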

ODA X6-2 processors are Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz. Here is an example of the LIOPS you can reach when running on all 40 threads of an X6-2M:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 39.9 545.6 0.00 25.43
DB CPU(s): 38.8 530.9 0.00 24.75
Logical read (blocks): 18,494,690.4 252,862,769.1

This is 18 million logical reads per second on this 2-socket (2s20c40t) appliance. Expect half of that on the X6-2S, which has a single socket (1s10c20t).

The core factor for those processors is 0.5, which means that you can run an Enterprise Edition ‘odb2’ (2 cores x 0.5 = 1 processor) with a single processor license (public price $47,500) and have 4 sessions running in CPU, which still lets you do nearly 3 million logical reads per second, as here:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 4.0 54.5 0.00 13.78
Logical read (blocks): 2,901,991.5 39,660,331.1

Those shapes are defined as:

[Screenshot: CaptureODAShapes]

Given the high LIOPS and the available memory, this entry-level appliance can be sufficient for most medium OLTP workloads.

Classes

Three classes are defined to derive the database parameters from the shape.

[Screenshot: CaptureODAClasses]

The SGA and PGA are sized from the shape memory and a class factor.
OLTP gives 50% to the SGA and 25% to the PGA, which means that, for example, an odb4 gets sga_target=16G and pga_aggregate_target=8G.
DSS gives 25% to the SGA and 50% to the PGA.

Note that OLTP is the only class available in Standard Edition. This does not mean that you can run only OLTP. You can change the memory settings later (DSS usually needs more PGA than SGA) and you have very good storage bandwidth and IOPS (NVMe access to SSD). This setting is more an indication that most data warehouses need features available only in Enterprise Edition, such as parallel query, partitioning and bitmap indexes.
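
As an example of changing the memory settings later, an odb4 provisioned with the OLTP class could be given DSS-style proportions simply by swapping the two targets (a sketch based on the percentages above; written to the spfile, so it takes effect at the next restart):

-- odb4 OLTP defaults are sga_target=16G and pga_aggregate_target=8G; DSS proportions reverse them
ALTER SYSTEM SET sga_target=8G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target=16G SCOPE=SPFILE;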

ASM or ACFS?

[Screenshot: CaptureODAStorage]

The template shapes above define a 100GB database. When you create a new database you have the choice to put it directly on +DATA and +RECO, or to create a 100GB ADVM volume and ACFS filesystem that will be mounted under /u02/app/oracle/oradata. If you choose ACFS, the FRA and redo logs are created under the /u03/app/oracle mount point, which is a common ACFS.
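
To check which layout an existing database ended up with, the file paths tell the story: ASM files start with +DATA or +RECO, while ACFS files sit under the /u02 and /u03 mount points.

-- '+DATA/...' means ASM, '/u02/app/oracle/oradata/...' means the per-database ACFS volume
SELECT name FROM v$datafile
UNION ALL
SELECT member FROM v$logfile;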

The default is ACFS but you should think about it. For production, the best performance is with ASM. You have SSD to avoid disk latency. You have NVMe to reduce CPU latency. You don’t want to add the ACFS layer on top of that. The maximum IOPS we observe is 10 times higher with datafiles directly on ASM:

@FranckPachot Don't know? We get 75k IOPS on ACFS and 700k IOPS on pure ASM. /cc @kevinclosson pic.twitter.com/TcLzUsOh0d

— Marco Mischke (@DBAMarco) August 29, 2016

For test databases, where you use snapshot features, especially with multitenant, you may choose ACFS. However, why not create the CDB in ASM and use ACFS only for the PDBs you want to snapshot? There is no need for that additional layer for the CDB files, and it is better to isolate the master and its clones for a specific environment into their own ACFS filesystem.

And anyway, the ODA X6-2S and X6-2M are very interesting for Standard Edition, and you cannot use snapshots or those ACFS features for a database in Standard Edition.

Storage performance is truly amazing. At 100,000 IOPS, 99% of single block reads complete in less than 256 microseconds and 97% in less than 128 microseconds. At 800,000 IOPS here are the figures:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read 6.9M 4.0 74.1 21.7 .2 .0 .0 .0
db file sequential read 18.4M 83.1 16.3 .7 .0 .0 .0

So what?

It’s nice to have an easy GUI to provision a database on the ODA. However, there are some limits to it:

  • Be careful with the defaults: they may not fit what you want. Do you want your databases on ACFS?
  • Not all operations can be done through the GUI: you can create a database but not delete one.

But there’s more: performance is there, and you can run applications that need high performance.

Do you know any other solution which gives you a fully certified system, installed in a few hours, with databases ready? And with very good hardware and managed software costs (NUP, Standard Edition in socket metric, or Enterprise Edition capacity-on-demand by multiples of 1 processor license)?
You need high availability? In Standard Edition you cannot use Data Guard, but you can buy Dbvisit standby, which gives you switchover and failover (Recovery Point Objective of a few minutes) to a second ODA or to a cloud service. Of course, you can build or buy custom scripts to manage a manual standby. However, if you go for ODA you probably appreciate easy and robust software.

 

The article ODA X6 database classes and shapes appeared first on Blog dbi services.

What Does Oracle PARTITION BY Do?

Complete IT Professional - Mon, 2016-08-29 06:00
Have you ever seen the PARTITION BY keyword used in Oracle queries? If so, do you know what it does and how to use it? Learn about the Oracle PARTITION BY keyword and see some examples in this article. What is the Oracle PARTITION BY Keyword? PARTITION BY is a keyword that can be used […]
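
As a quick flavour of what the article covers, an analytic function with PARTITION BY adds an aggregate next to each detail row instead of collapsing the rows (a generic sketch against the sample HR schema):

-- every employee row keeps its own columns and also shows its department's total salary
SELECT employee_id, department_id, salary,
       SUM(salary) OVER (PARTITION BY department_id) AS dept_total_salary
  FROM hr.employees;
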
Categories: Development

PeopleSoft User Security

When performing a PeopleSoft security audit, reconciling users should be one of the first tasks. This includes default accounts created through the installation of PeopleSoft as well as user accounts associated with staff, vendors and customers.

The following are several of the topics that Integrigy investigates during our PeopleSoft security configuration assessments - take a look today at your settings:

  • Default accounts - PeopleSoft default application user accounts with superuser privileges should, where possible, be removed or have their passwords changed. Carefully consult your documentation, but this is a key task.

Default Oracle PeopleSoft Users

BELHR, JCADMIN1, PSJPN, CAN, NLDHR, PSPOR, CFR, PS, TIME, CNHR, PSCFR, UKHR, ESP, PSDUT, UKNI, FRA, PSESP, USA, FRHR, PSFRA, HSHR, GER, PSGER, WEBGUEST, GRHR, PSINE, WEBMODEL

  • Stale users – users that have not logged on in months or years should be identified and removed. Use the following SQL to locate stale users:
SELECT * FROM SYSADM.PSPTLOGINAUDIT;
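
If that audit table is not populated in your environment, a rough alternative (an assumption to verify against your PeopleTools release) is the last sign-on timestamp stored on the user profile itself:

-- users who have not signed on in roughly six months, or never (LASTSIGNONDTTM lives on PSOPRDEFN)
SELECT OPRID, OPRDEFNDESC, LASTSIGNONDTTM
  FROM SYSADM.PSOPRDEFN
 WHERE LASTSIGNONDTTM < SYSDATE - 180
    OR LASTSIGNONDTTM IS NULL;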

To manage accounts, the following navigation can assist. As it cannot be mentioned enough: BEFORE you disable or delete any user, TEST in non-production first.

User management:

  1. Select PeopleTools, Security, User Profiles, User Profiles
  2. Select user to disable or delete
  3. If disabling, check the Account Locked Out check box
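
Before and after locking accounts through the UI, it can help to list which profiles are already flagged as locked (again an assumption to verify in your release; ACCTLOCK = 1 corresponds to the Account Locked Out check box):

-- user profiles currently locked out
SELECT OPRID, OPRDEFNDESC
  FROM SYSADM.PSOPRDEFN
 WHERE ACCTLOCK = 1;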


 

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Performance issue

Tom Kyte - Mon, 2016-08-29 00:26
Hi Team, We have a database where we have observed that every morning from 4 to 5 AM the application user is not able to insert data into the table. We have seen in the application web page that the red count is high, which means it is not able to insert ...
Categories: DBA Blogs

Intermittent Ora-06502: PL/SQL: numeric or value error - For Update Skip locked

Tom Kyte - Mon, 2016-08-29 00:26
Hi Tom, We are getting an intermittent Ora-06502: PL/SQL: numeric or value error with FOR UPDATE SKIP LOCKED. This error occurs only when the application (.Net) calls this SP, and it occurs at the last pending row from the table. But we are able to execute the same SP f...
Categories: DBA Blogs

Gather_table_stats when UPDATING rows in a table using rownum as predicate

Tom Kyte - Mon, 2016-08-29 00:26
Hello Tom, I came across a structure wherein users creating the table or updating the table use the predicate as "rownum exec dbms_stats.gather_table_stats(xx,xx)". Can you help us explain how this predicate structure works in Cre...
Categories: DBA Blogs

Are analytic RDBMS and data warehouse appliances obsolete?

Curt Monash - Sun, 2016-08-28 20:28

I used to spend most of my time — blogging and consulting alike — on data warehouse appliances and analytic DBMS. Now I’m barely involved with them. The most obvious reason is that there have been drastic changes in industry structure:

Simply reciting all that, however, begs the question of whether one should still care about analytic RDBMS at all.

My answer, in a nutshell, is:

Analytic RDBMS — whether on premises in software, in the form of data warehouse appliances, or in the cloud – are still great for hard-core business intelligence, where “hard-core” can refer to ad-hoc query complexity, reporting/dashboard concurrency, or both. But they aren’t good for much else.

To see why, let’s start by asking: “With what do you want to integrate your analytic SQL processing?”

  • If you want to integrate with relational OLTP (OnLine Transaction Processing), your OLTP RDBMS vendor surely has a story worth listening to. Memory-centric offerings MemSQL and SAP HANA are also pitched that way.
  • If you want to integrate with your SAP apps in particular, HANA is the obvious choice.
  • If you want to integrate with other work you do in the Amazon cloud, Redshift is worth a look.

Beyond those cases, a big issue is integration with … well, with data integration. Analytic RDBMS got a lot of their workloads from ELT or ETLT, which stand for Extract/(Transform)/Load/Transform. I.e., you’d load data into an efficient analytic RDBMS and then do your transformations, vs. the “traditional” (for about 10-15 years of tradition) approach of doing your transformations in your ETL (Extract/Transform/Load) engine. But in bigger installations, Hadoop often snatches away that part of the workload, even if the rest of the processing remains on a dedicated analytic RDBMS platform such as Teradata’s.

And suppose you want to integrate with more advanced analytics — e.g. statistics, other predictive modeling/machine learning, or graph analytics? Well — and this both surprised and disappointed me — analytic platforms in the RDBMS sense didn’t work out very well. Early Hadoop had its own problems too. But Spark is doing just fine, and seems poised to win.

My technical observations around these trends include:

  • Advanced analytics commonly require flexible, iterative processing.
  • Spark is much better at such processing than earlier Hadoop …
  • … which in turn is better than anything that’s been built into an analytic RDBMS.
  • Open source/open standards and the associated skill sets come into play too. Highly vendor-proprietary DBMS-tied analytic stacks don’t have enough advantages over open ones.
  • Notwithstanding the foregoing, RDBMS-based platforms can still win if a big part of the task lies in fancy SQL.

And finally, if a task is “partly relational”, then Hadoop or Spark often fit both parts.

  • They don’t force you into using SQL or everything, nor into putting all your data into relational schemas, and that flexibility can be a huge relief.
  • Even so, almost everybody who uses those uses some SQL, at least for initial data extraction. Those systems are also plenty good enough at SQL for joining data to reference tables, and all that other SQL stuff you’d never want to give up.

But suppose you just want to do business intelligence, which is still almost always done over relational data structures? Analytic RDBMS offer the trade-offs:

  • They generally still provide the best performance or performance/concurrency combination, for the cost, although YMMV (Your Mileage May Vary).
  • One has to load the data in and immediately structure it relationally, which can be an annoying contrast to Hadoop alternatives (data base administration can be just-in-time) or to OLTP integration (less or no re-loading).
  • Other integrations, as noted above, can also be weak.

Suppose all that is a good match for your situation. Then you should surely continue using an analytic RDBMS, if you already have one, and perhaps even acquire one if you don’t. But for many other use cases, analytic RDBMS are no longer the best way to go.

Finally, how does the cloud affect all this? Mainly, it brings one more analytic RDBMS competitor into the mix, namely Amazon Redshift. Redshift is a simple system for doing analytic SQL over data that was in or headed to the Amazon cloud anyway. It seems to be quite successful.

Bottom line: Analytic RDBMS are no longer in their youthful prime, but they are healthy contributors in middle age. Mainly, they’re still best-of-breed for supporting demanding BI.
