Feed aggregator

When does uncommitted data roll back from the data file?

Tom Kyte - Tue, 2016-08-09 22:26
Dear Team, when does DBWR flush data from the db buffer cache? When a checkpoint occurs? Dirty buffers reach a threshold? There are no free buffers? A timeout occurs? Tablespace OFFLINE? Tablespace READ ONLY? Table DROP or TRUNCATE ...
Categories: DBA Blogs

Create a database trigger (Dynamic) on a table

Tom Kyte - Tue, 2016-08-09 22:26
Hi Tom, I have a requirement where I have to create a table level trigger dynamically. Here are more details : A user will be provided an option to choose the columns in a table XYZ which will be tracked for any changes - inserting, updati...
Categories: DBA Blogs

excessive log generation

Tom Kyte - Tue, 2016-08-09 22:26
hi - we have an 11.2.0.4 database in production that was generating too many redo logs even when the database is dormant. We initially thought it was due to RMAN backups, but now we are seeing that if we issue one log switch it is causing too many lo...
Categories: DBA Blogs

SMTP ACL gets dropped occasionally

Tom Kyte - Tue, 2016-08-09 22:26
Hi, I have a schema which uses the ACL like below to send mails. The ACL gets dropped occasionally and the application using the schema is unable to send the mails. We need to manually add the ACL again and everything gets back to normal. Can anyone expe...
Categories: DBA Blogs

Query Mysql database from oracle

Tom Kyte - Tue, 2016-08-09 22:26
Hi! I am trying to query a MySQL database from Oracle. I have configured everything and everything is working fine, but when I query any MySQL table from Oracle, I don't get the columns having the NUMBER data type. In the HS file I have the following ...
Categories: DBA Blogs

ORACLE Query to return First Row for a DataSet

Tom Kyte - Tue, 2016-08-09 22:26
I'm writing a query which would return the first row of a set of grouped data. I tried using the OVER (PARTITION BY) clause, but somehow I'm not getting the desired result: select row_number() OVER(PARTITION BY leafv , value_group , l1d ,l2d ,l3d ,l4d ...
Categories: DBA Blogs

Mutating table

Tom Kyte - Tue, 2016-08-09 22:26
Hi, I was supposed to make a script that will find data in a table and, based on that, insert into the same table. When I was doing this operation I used a temporary table to prevent the mutating table error (it did not work, and I know my before insert trigge...
Categories: DBA Blogs

Oracle EPM Cloud in Focus at OpenWorld 2016

Linda Fishman Hoyle - Tue, 2016-08-09 12:52

A Guest Post by Jennifer Toomey, Sr. Principal Product Marketing Director, Oracle 

Oracle OpenWorld San Francisco (September 18 - 22, 2016) offers more Oracle EPM content and expert experience than any other conference in the world. It features more than 60 EPM conference sessions, a dedicated EPM Showcase area for demos, three hands-on labs, plus presentations from multiple customers, partners, and Oracle staff.

Whether you already have, or are considering, an Oracle EPM solution, Oracle OpenWorld is the place to be.

Cloud: With multiple new cloud offerings released this year, EPM Cloud is in the spotlight with sessions featuring Oracle EPM Cloud customers, products, and strategy, as well as roadmap. Attendees will have the opportunity to get a closer look at these new cloud solutions. In addition, a number of customers will share their experiences and results from using Oracle EPM Cloud.

On Premises: There will also be sessions covering on-premises Oracle EPM products, including customers’ case studies, what’s new, and what’s coming.

EPM Showcase: This year we have created an area specifically for EPM that will be located on the second floor of Moscone West. Attendees can take advantage of this opportunity to meet with EPM prospects and customers in a central location that integrates conference sponsors, exhibitors, and demos.

Recommended Sessions

  • Oracle EPM General Session with Deloitte: Executive Briefing on Oracle’s EPM Strategy and Roadmap [GEN6336]
  • Customers Present: Oracle EPM and ERP Cloud Together [CON7514]
  • Customers Present: EPM Cloud for Midsize Customers [CON7515]
  • Application Integration: EPM, ERP, Cloud and On-premises - All the Options Explained [CON7497]
  • Product Development Panel Q&A: Oracle Hyperion EPM Applications [CON7538]

Customers: Watch for these customers who are speaking: Barnes & Noble, Harvard Medical Faculty Physicians, Mattel, CIMA, JC Penney, Virginia Commonwealth University, Babcock & Wilcox, Lexington-Fayette Urban County, Meredith Corporation, SNC-Lavalin, and many more.

Get More Details: Use this link to find out everything about Oracle EPM and OpenWorld 2016. And, don’t forget the Customer Appreciation Event at AT&T Park on Wednesday evening, September 21.

Have fun and learn at Oracle OpenWorld 2016. We look forward to seeing you in San Francisco!

Yes, Host Aggregate I/O Queue Depth is Important. But Why Overdo When Using All-Flash Array Technology? Complexity is Sometimes a Choice.

Kevin Closson - Tue, 2016-08-09 10:51
That’s The Way We’ve Always Done It

I recently updated the EMC best practices guide for Oracle Database on XtremIO. One of the topics in that document is how many host LUNs (mapped to XtremIO storage array volumes) administrators should use for each ASM disk group. While performing the testing for the best practices guide it dawned on me that this topic is suitable for a blog post. I think too many DBAs are still using the ASM disk group methodology that made sense with mechanical storage. With All Flash Arrays–like XtremIO–administrators can rethink the complexities of the way they've always done it–as the adage goes.

Before reading the remainder of the post, please be aware that this is the first installment in a short series about host LUN count and ASM disk groups in all-flash environments. Future posts will explore additional reasons why simple ASM disk groups in all-flash environments make a lot of sense.

How Many Host LUNs are Needed With All Flash Array Technology

We’ve all come to accept the fact that–in general–mechanical storage offers higher latency than solid state storage (e.g., All Flash Array). Higher latency storage requires more aggregate host I/O queue depth in order to sustain high IOPS. The longer I/O takes to complete the longer requests have to linger in a queue.

With mechanical storage it is not at all uncommon to construct an ASM disk group with over 100 (or hundreds of) ASM disks. That may not sound too complex to the lay person, but that’s only a single ASM disk group on a single host. The math gets troublesome quite quickly with multiple hosts attached to an array.

So why are DBAs creating ASM disk groups consisting of vast numbers of host LUNs after they adopt all-flash technology? Well, generally it's because that's how it has always been done in their environment. However, there is no technical reason to assemble complex, large disk-count ASM disk groups with storage like XtremIO. With All Flash Array technology latencies are an order of magnitude (or more) shorter than with mechanical storage. Driving even large IOPS rates is possible with very few host LUNs in these environments because the latencies are low. To put it another way:

With All Flash Array technology host LUN count is strictly a product of how many IOPS your application demands

Lower I/O latency allows administrators to create ASM disk groups with very low numbers of ASM disks. Fewer ASM disks means fewer block devices. Fewer block devices means a simpler physical storage layout, and simpler is always better–especially in modern, complex IT environments.
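
To make that concrete, here is a minimal sketch of what such a simple disk group could look like. The device paths, disk group name, and the choice of external redundancy are illustrative assumptions on my part, not a recommendation lifted from the best practices guide:

-- Hypothetical example: a small ASM disk group built from just two large host LUNs.
-- Device paths and attribute values are assumptions for illustration only.
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xtremio_lun1',
       '/dev/mapper/xtremio_lun2'
  ATTRIBUTE 'compatible.asm'   = '12.1',
            'compatible.rdbms' = '12.1';

Two block devices to provision, zone, and monitor instead of dozens is exactly the kind of simplification the rest of this post quantifies.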

Case Study

In order to illustrate the relationship between concurrent I/O and host I/O queue depth, I conducted a series of tests that I’ll share in the remainder of this blog post.

The testing consisted of varying the number of ASM disks in a disk group from 1 to 16 host LUNs mapped to XtremIO volumes. SLOB was executed with varying numbers of zero-think-time sessions from 80 to 480 and with slob.conf->UPDATE_PCT set to 0 and 20. The SLOB scale was 1TB and I used the SLOB Single-Schema Model. The array was a 4 X-Brick XtremIO array connected to a single 2s36c72t Xeon server running single-instance Oracle Database 12c and Linux 7. The default Oracle Database block size (8KB) was used.

Please note: Read Latencies in the graphics below are db file sequential read wait event averages taken from AWR reports and therefore reflect host I/O queueing time. The array-level service times are not visible in these graphics. However, one can intuit such values by observing the db file sequential read latency improvements when host I/O queue depth increases. That is, when host queueing is minimized the true service times of the array are more evident.
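
For readers who want to make the same observation in their own environment, a rough sketch of pulling the per-snapshot average wait for that event out of the AWR repository is shown below. It assumes a single instance and a Diagnostics Pack license, and it is not the exact query behind the graphics:

-- Sketch only: average 'db file sequential read' latency (microseconds) per AWR snapshot.
SELECT snap_id,
       ROUND( (time_waited_micro - LAG(time_waited_micro) OVER (ORDER BY snap_id))
            / NULLIF(total_waits  - LAG(total_waits)      OVER (ORDER BY snap_id), 0) ) AS avg_wait_us
FROM   dba_hist_system_event
WHERE  event_name = 'db file sequential read'
ORDER  BY snap_id;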

Test Configuration HBA Information

The host was configured with 8 Emulex LightPulse 8GFC HBA ports. HBA queue depth was configured in accordance with the XtremIO Storage Array Host Configuration Guide thus lpfc_lun_queue_depth=30 and lpfc_hba_queue_depth=8192.
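
On Linux hosts those Emulex parameters typically land in a modprobe configuration file along the following lines; the exact file name and the need to rebuild the initramfs are assumptions about a typical setup, so always defer to the host configuration guide for your platform:

# /etc/modprobe.d/lpfc.conf -- illustrative only; follow the XtremIO Host Configuration Guide
options lpfc lpfc_lun_queue_depth=30 lpfc_hba_queue_depth=8192
# Rebuild the initramfs and reboot the host for the settings to take effect.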

Test Configuration LUN Sizes

All ASM disks in the testing were 1TB. This means that the 1-LUN test had 1TB of total capacity for the datafiles and redo logs. Conversely, the 16-LUN test had 16TB capacity.  Since the SLOB scale was 1TB readers might ponder how 1TB of SLOB data and redo logs can fit in 1TB. XtremIO is a storage array that has always-on, inline data reduction services including compression and deduplication. Oracle data blocks cannot be deduplicated. In the testing it was the XtremIO array-level compression that allowed 1TB scale SLOB to be tested in a single 1TB LUN mapped to a 1TB XtremIO volume.

Read-Only Baseline

Figure 1 shows the results of the read-only workload (slob.conf->UPDATE_PCT=0). As the chart shows, Oracle database is able to perform 174,490 read IOPS (8KB) with average service times of 434 microseconds with only a single ASM disk (host LUN) in the ASM disk group. This I/O rate was achieved with 160 concurrent Oracle sessions. However, when the session count increased from 160 to 320, the single LUN results show evidence of deep queueing. Although the XtremIO array service times remained constant (detail that cannot be seen in the chart), the limited aggregate I/O queue depth caused the db file sequential read waits at 320, 400 and 480 sessions to increase to 1882us, 2344us and 2767us respectively. Since queueing causes the total I/O wait time to increase, adding sessions does not increase IOPS.

As seen in the 2 LUN group (Figure 1), adding an XtremIO volume (host LUN) to the ASM disk group had the effect of nearly doubling read IOPS in the 160 session test but, once again, deep queueing started to occur in the 320 session case and thus db file sequential read waits approached 1 millisecond—albeit at over 300,000 IOPS. Beyond that point the 2 LUN case showed increasing latency and thus no improvement in read IOPS.

Figure 1 also shows that from 4 LUNs through 16 LUNs latencies remained below 1 millisecond even as read IOPS approached the 520,000 level. With the information in Figure 1, administrators can see that host LUN count in an XtremIO environment is actually determined by how many IOPS your application demands. With mechanical storage administrators were forced to assemble large numbers of host LUNs for ASM disks to accommodate high storage service times. This is not the case with XtremIO.

Figure 1 (chart: read-only IOPS and db file sequential read latency by ASM disk count and session count)

Read / Write Test Results

Figure 2 shows measured IOPS and service times based on the slob.conf->UPDATE_PCT=20 testing. The IOPS values shown in Figure 2 are the combined foreground and background process read and write IOPS. The I/O ratio was very close to 80:20 (read:write) at the physical I/O level. As was the case in the 100% SELECT workload testing, the 20% UPDATE testing was also conducted with varying Oracle Database session counts and host LUN counts. Each host LUN mapped to an XtremIO volume.

Even with moderate SQL UPDATE workloads, the top Oracle wait event will generally be db file sequential read when the active data set is vastly larger than the SGA block buffer pool—as was the case in this testing. As such, the key performance indicator shown in the chart is db file sequential read.

As was the case in the read-only testing, this series of tests also shows that significant amounts of database physical I/O can be serviced with low latency even when a single host LUN is mapped to a single XtremIO volume. Consider, for example, the 160 session count test with a single LUN where 130,489 IOPS were serviced with db file sequential read wait events serviced in 754 microseconds on average. The positive effect of doubling host aggregate I/O queue depth can be seen in Figure 2 in the 2 LUN portion of the graphic.  With only 2 host LUNs the same 160 Oracle Database sessions were able to process 202,931 mixed IOPS with service times of 542 microseconds. The service time decrease from 754 to 542 microseconds demonstrates how removing host queueing allows the database to enjoy the true service times of the array—even when IOPS nearly doubled.

With the data provided in Figures 1 and 2, administrators can see that it is safe to configure ASM disk groups with very few host LUNs mapped to the XtremIO storage array, making for a simpler deployment. Only those databases demanding significant IOPS need to be created in ASM disk groups with larger numbers of host LUNs.

Figure 2 (chart: mixed read/write IOPS and db file sequential read latency for the slob.conf->UPDATE_PCT=20 workload)

Figure 3 shows a table summarizing the test results. I invite readers to look across their entire IT environment and find their ASM disk groups that sustain IOPS that require even more than a single host LUN in an XtremIO environment. Doing so will help readers see how much simpler their environment could be in an all-flash array environment.

Figure 3 (table summarizing the test results)

Summary

Everything we know in IT has a shelf-life. Sometimes the way we’ve always done things is no longer the best approach. In the case of deriving ASM disk groups from vast numbers of host LUNs, I’d say All-Flash Array technology like XtremIO should have us rethinking why we retain old, complex ways of doing things.

This post is the first installment in a short series on ASM disk groups in all-flash environments. The next installment will show readers why low host LUN counts can even make adding space to an ASM disk group much, much simpler.


Filed under: oracle

Webcast: The Road to Cloud: Digital Experience Best Practices

WebCenter Team - Tue, 2016-08-09 09:01
The Road to Cloud: Digital Experience Best Practices
Prioritize your customer, partner and employee experiences

Digital Transformation

With the rise of the digital world, web, mobile, social and cloud technologies have changed people’s expectations of how they engage with each other and how work gets done. For most organizations, it’s not a matter of “if” they will migrate to the cloud, it’s “when”.

Join CMSWire with Craig Wentworth, Principal Analyst at MWD Advisors, and David Le Strat, Senior Director of Product Management at Oracle, for a one-hour webinar on how you can leverage your current IT investments as you modernize your applications infrastructure to embrace new digital imperatives to meet customer, partner and employee experiences.

Wed, Aug 24 at 10am PT/ 1pm ET/ 7pm CET

This webinar will cover:

  • How to overcome common challenges and hurdles of cloud adoption
  • Key trends in embracing cloud, content and experience management solutions
  • How to leverage your existing investments while still reaping the benefits of the cloud

Register Today!

Bonus: Webinar attendees have a chance to win a free pass to CMSWire's DX Summit 2016, November 14 - 16, in Chicago (a value of $1295). The winner will be announced at the end of the live Q&A.

Migrating from smallfile tablespace to bigfile

Tom Kyte - Tue, 2016-08-09 04:06
hi - we recently migrated our 11.2.0.4 database from a non-RAC to a RAC system, so the tablespaces came over as smallfile tablespaces. This is our platform database and a 24x7 OLTP system. What is the best way to move them into bigfile tablespaces with mi...
Categories: DBA Blogs

Script to compare data in all tables in two different databases, if the table is present in both the databases.

Tom Kyte - Tue, 2016-08-09 04:06
Hi, I am looking for a stored procedure to compare the data in all the tables in two different databases. I have 2 databases, DB1 and DB2. From DB1, a DB link is created to access DB2. The first step is to find all the tables that exist in...
Categories: DBA Blogs

Restrict Access on Active DG Primary DB

Tom Kyte - Tue, 2016-08-09 04:06
In our environment we have a production server which handles all reads/writes, and we have an Active Data Guard standby which we use to offload backups as well as read-only reporting. We have already established a mechanism to handle password resets when an end u...
Categories: DBA Blogs

Number of Execution per snapshot too high for rman sql

Tom Kyte - Tue, 2016-08-09 04:06
Hi Team, I have been seeing many executions of below query in our database. begin sys.dbms_backup_restore.createRmanOutputRow( l0row_id => :l0row_id, l0row_stamp => :l0row_stamp, row_id => :row_id, row_stamp =>:row_stamp, txt=> :txt, sameline ...
Categories: DBA Blogs

Trigger to allow insertion only on Sunday

Tom Kyte - Tue, 2016-08-09 04:06
I have an employee table. I want to create a trigger that will not allow insertion into the table on Sunday. Tell me the program please. Thank you.
Categories: DBA Blogs

JSON from Relational Data

Tom Kyte - Tue, 2016-08-09 04:06
With all the new JSON features, is there a way to take queries over relational data (i.e., normal Oracle tables) and generate JSON objects on the fly? More and more vendors are using REST-based APIs that process JSON (a key one for us is Oracle Sales ...
Categories: DBA Blogs

Index Competition in #Oracle 12c

The Oracle Instructor - Tue, 2016-08-09 02:39

Suppose you want to find out which type of index is best for performance with your workload. Why not set up a competition and let the optimizer decide? The playground:

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
SQL_ID  3hrvrf1r6kn8s, child number 0
-------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 3593230073

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | BSTAR |     1 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=9)


20 rows selected.
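
The @lastplan script itself is not listed in the post; a minimal stand-in that produces equivalent output for the most recent statement run in the session (my sketch, not necessarily the author's script) would be:

-- lastplan.sql (sketch): show the execution plan of the last cursor executed in this session
-- (run with SET SERVEROUTPUT OFF, otherwise the last cursor is the DBMS_OUTPUT call)
select * from table(dbms_xplan.display_cursor);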

There is a standard B*tree index on the column CHANNEL_ID that speeds up the SELECT above. I think a bitmap index would be better:

ADAM@pdb1 > create bitmap index bmap on sales(channel_id) invisible nologging;

Index created.

ADAM@pdb1 > alter index bstar invisible;

Index altered.

ADAM@pdb1 > alter index bmap visible;

Index altered.

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 2178022915

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     3   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | BMAP  |       |       |            |          |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("CHANNEL_ID"=9)


21 rows selected.

With this 12c New Feature (two indexes on the same column), I got a smooth transition to the new index type. But this left no choice to the optimizer. What about this?

ADAM@pdb1 > alter index bmap invisible;

Index altered.

ADAM@pdb1 > alter session set optimizer_use_invisible_indexes=true;

Now both indexes are invisible and the optimizer may choose any of them. Turns out that it likes the bitmap index better here. Instead of watching the execution plans, V$SEGMENT_STATISTICS can also be used to find out:

ADAM@pdb1 > select object_name,statistic_name,value
            from v$segment_statistics
            where object_name in ('BSTAR','BMAP')
            and statistic_name in ('physical reads','logical reads');

OBJECT STATISTIC_NAME                      VALUE
------ ------------------------------ ----------
BSTAR  logical reads                       22800
BSTAR  physical reads                       6212
BMAP   logical reads                        1696
BMAP   physical reads                          0

The numbers for BSTAR remain static while the BMAP numbers increase. You may also monitor that with DBA_HIST_SEG_STAT across AWR snapshots. Now isn't that cool? :-)
A couple of things to be aware of here:

  • Watch out for more than just physical/logical reads – bitmap indexes may cause locking problems in an OLTP environment.
  • Don't keep the two indexes invisible forever – after you have seen which one performs better, drop the other one (a minimal wrap-up sketch follows below). Invisible indexes still need to be maintained upon DML and therefore slow it down.


Tagged: 12c New Features, Performance Tuning
Categories: DBA Blogs

link layers 2 and 3

Pat Shuff - Tue, 2016-08-09 02:07
We are going through the OSI 7 layer stack and looking at the different layers. Yesterday we started the discussion by looking at Kevin Fall and Richard Stevens' book TCP/IP Illustrated Volume 1. In this book they describe the different layers and look at the how, what, and why of the design. Today we will focus on layers 2 and 3, the link layer and the network layer.

Alternate sources of information about these layers can be found at

Layer 2 is basically a way of communicating between two neighbors. How long a bit of data is kept on the wire, physical addressing, and aggregation of data packets are defined here. If you have ever wondered what a MAC address is, this is where it is defined. Vendors are given a sequence of bits that identifies the devices that they create. Note that this is not your IP address but a physical sequence of bits as defined by the Institute of Electrical and Electronics Engineers (IEEE) 802 definition. The MAC address consists of six octets, with the first three octets identifying a corporation or manufacturer and the last three octets representing a unique sequence number for a device that the vendor manufactured. An example of this would be the MAC address on my MacBook Pro, 00:26:b0:da:c8:10. Apple is assigned 00:26:b0 as the identifier for their products. My specific laptop gets the identifier da:c8:10. When a data packet is placed on the network through a hard-wired cable or wifi, it carries the unique MAC address of my laptop. When data was generated and consumed by physical hardware these addresses meant something. With virtualization and containers the MAC address has become somewhat meaningless because these values are synthetic. You really can't determine whether something came from an Apple product because we can map the above MAC address to a virtual machine by defining it as a parameter. It is best practice not to use the same MAC address twice in a physical network, because all of the computers with that address will pick up the packet off the wire and decode it.

Layer 3 is the communication protocol that is used to create and define packets. Apple, for example, defined a protocol called AppleTalk so that you could talk between Apple computers and devices. This protocol did not really take off. Digital Equipment Corporation did something similar with VAX/VMS and DECnet. This allowed their computers to talk to each other very efficiently and consume a network without regard for other computers on the network. Over the years the IP protocol has dominated. The protocol is currently in transition from IPv4 to IPv6 because the number of devices attached to the internet has exceeded the available addresses in the protocol. An IPv4 address uses dotted-quad or dotted-decimal notation with four fields. For example, 129.152.168.100 is a valid IP address. All four fields can range from 0 to 255, with some of the values reserved. For example, 0.0.0.0 is not considered to be a valid host address and neither is 255.255.255.255, because they are reserved for special functions. IPv6 uses a similar notation but addresses are denoted as eight blocks of 16-bit values. An example of this would be 5f05:2000:80ad:5800:58:800:2023:1d71. Note that this gives us 128 bits rather than 32 bits to represent an address. IPv4 has 4,294,967,296 possible addresses in its address space, and IPv6 has 340,282,366,920,938,463,463,374,607,431,768,211,456.

With IPv4 addressing there is the concept of classes of networks. A class A network consists of a leading zero followed by seven bits to define a network and 24 bits to define a specific host. This is typically not used when talking about cloud services. A class B network consists of a leading 10 followed by 14 bits to define a network and 16 bits to define a host. Data centers typically use something like this because they could have thousands of servers in a data center. A class C network consists of a leading 110 followed by 21 bits to define the network and 8 bits to define a host. This allows 256 addresses (254 usable hosts) on one network, which could be a department or office building. A class D address starts with 1110 and is used for multicast. A packet sent to such an address is delivered to all hosts that have joined the multicast group; hosts should, but are not mandated to, pick up this packet and look at the data element. A class E address starts with 1111 and is considered to be reserved and not to be used. The image from Chapter 2 of TCP/IP Illustrated Volume I shows the above visually.

This comes into play when someone talks about netmasks. A /16 means that the first 16 bits are the network prefix (the part used for routing) and the remaining 16 bits identify hosts on that network. You might also see /24, which means that the first 24 bits define the network and the last 8 bits identify the host. If you set your netmask to 255.255.255.0 on a class B network, the first 16 bits define the corporate network, the next 8 bits define the subnet in the company, and the last 8 bits define the specific host. This means that you can have 256 subnets in the company and 254 usable host addresses on each subnet. For example, with a 255.255.255.0 mask the hosts 192.168.10.37 and 192.168.10.200 are on the same subnet (192.168.10.0), while 192.168.11.5 is not. A netmask of 255.255.255.0 means a host will not route outside of its subnet when the first three octets of the destination match its own. What this means is that a router either passes the packets through or does not, based on the netmask and IP address of the destination.
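
To show the arithmetic in a form readers of this feed can actually run, here is a throwaway illustration of the bitwise AND behind a 255.255.255.0 netmask, done with Oracle's BITAND on the 32-bit integer forms of the address and mask. This is purely illustrative and not how routers are configured:

-- Illustration only: the network portion of 192.168.10.37 under a 255.255.255.0 mask
-- is the bitwise AND of the address and the mask, both expressed as 32-bit integers.
select bitand(192*power(2,24) + 168*power(2,16) + 10*power(2,8) + 37,   -- 192.168.10.37
              255*power(2,24) + 255*power(2,16) + 255*power(2,8) + 0)   -- 255.255.255.0
       as network_as_int                                                -- 3232238080 = 192.168.10.0
from dual;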

You might hear the term CIDR (Classless Inter-Domain Routing). CIDR does away with the fixed class boundaries and lets a prefix of any length (the /8, /16, /24 notation above) define a network, which governs how routes to and from a host are aggregated when there are multiple ways of traversing the network. We will not get into this here, but netmasks are good ways of limiting routing tables and spanning trees across networks. This is typically a phrase that you need to know about if you are looking at limiting communication and the flow of addresses across a data center.

Earlier we talked about reserved networks and subnets. Some of the network definitions for IPv4 are defined as private and non-routable networks. A list of these address ranges includes

  • 0.0.0.0/8 Hosts on the local network. May be used only as a source IP address.
  • 10.0.0.0/8 Address for private networks (intranets). Such addresses never appear on the public Internet.
  • 127.0.0.0/8 Internet host loopback addresses (same computer). Typically only 127.0.0.1 is used.
  • 169.254.0.0/16 “Link-local” addresses—used only on a single link and generally assigned automatically.
  • 172.16.0.0/12 Address for private networks (intranets). Such addresses never appear on the public Internet.
  • 192.168.0.0/16 Address for private networks (intranets). Such addresses never appear on the public Internet.
  • 224.0.0.0/4 IPv4 multicast addresses (formerly class D); used only as destination addresses.
  • 240.0.0.0/4 Reserved space (formerly class E), except 255.255.255.255.
  • 255.255.255.255/32 Local network (limited) broadcast address.

Multicast addressing is supported by IPv4 and IPv6. An IP multicast address (also called group or group address) identifies a group of host interfaces, rather than a single one. Most cloud vendors don't allow for multicast and restrict use of communications to unicast from one server to another.

Some additional terms that come up in networking discussions are network address translation (NAT), the Border Gateway Protocol (BGP), and firewalls. We will defer these conversations to the higher layer protocols because they involve more than just the IP address. At its simplest, a border router can just drop packets and not pass them outside the corporate network, independent of the netmask that the source host uses. If, for example, we want to stop someone from connecting to an IP address outside of our network and force the traffic through a firewall or packet-filtering device, a border gateway can redirect all traffic through these devices or drop the packets.

In summary, we skimmed over routing. This is a complex subject. We mainly talked about layers 2 and 3 to introduce the terms MAC address, IP address, IPv4, and IPv6. We touched on CIDR and routing tables as well as reserved addresses, BGP, and NAT. This is not a complete discussion of these subjects but an introduction of terms. Most cloud vendors do not support multicast or anycast broadcasts inside or outside of their cloud services. Most cloud vendors support IPv4 and IPv6 as well as subnet masking and multiple networks for servers and services. It is important to understand what a router is, how to configure a routing table, and the dangers of creating routing loops. We did not touch on hop count and hop cost because for most cloud implementations the topology is simple and servers inside a cloud implementation are rarely more than a hop or two away, unless you are trying to create a highly available service in another data center, zone, or region. Up next, the data layer and the IP datagram.

Modernize Customer Engagement: Collaborative Marketing Asset Development

WebCenter Team - Mon, 2016-08-08 15:21

Content and feature rich engagement sites can help drive effective interactions with various groups such as customers, partners, and employees, leading to higher satisfaction and loyalty. With Oracle’s collaborative marketing asset development solution, business users with absolutely no website experience can rapidly assemble rich, interactive engagement microsites for marketing and communities. Microsites can be built on the fly with new content and also incorporate existing enterprise content, processes, and social applications all within a single integrated user interface.

Digitally empowered consumers can be your greatest advocates and your most loyal buying population. By engaging these customers—tracking their web activities, intuiting their needs, and recommending next steps in the buying cycle—marketing professionals can control the customer experience, from initial contact to long term loyalty. 

Most companies depend on several channels to interact with their customers, including email, Web, mobile, and social. Customer needs vary from one channel to the next. In addition, their expectations change at each stage of these relationships, from initial awareness through qualification, purchase, repeat purchases, and ongoing service. 

At a time when most interactions take place online, meeting customer needs and exceeding their expectations has become a tremendous technical challenge. Market-leading organizations succeed by establishing a versatile set of information systems for creating customer-facing content and sharing it with prospects via integrated marketing and awareness-building campaigns. They create engaging digital customer experiences that fulfill each customer’s expectations during each interaction and they understand the importance of facilitating and nurturing exceptional experiences. We invite you to read this solution brief to see how collaborative marketing asset development can help you meet customer expectations and effectively engage your customers, partners and employees.

Asahi Refining Selects Oracle Cloud to Improve Financial Visibility and Accelerate Business Growth

Oracle Press Releases - Mon, 2016-08-08 12:13
Press Release
Asahi Refining Selects Oracle Cloud to Improve Financial Visibility and Accelerate Business Growth Modern Finance Platform Enables Asahi Refining to Embrace Industry Change

Redwood Shores, Calif.—Aug 8, 2016

Asahi Refining, the world’s leading provider of precious metal assaying, refining, and bullion products, selected Oracle Cloud Applications and Oracle Cloud Platform to streamline its procurement and financial processes, gain a more comprehensive and accurate picture of its financials, and provide better visibility into the business. By moving to the cloud, Asahi Refining has been able to shift its full attention to its core business of refining gold and silver and accelerate business growth.

The ongoing digitization of the refining industry means that organizations need an integrated financial platform to leverage data insights that can help evolve their business models and retain their competitive advantage. To address this market shift, Asahi Refining needed to overhaul its legacy enterprise resource planning (ERP) system, which was difficult to maintain, had limited reporting capabilities and contained fragmented data spread across various silos. The company needed a modern, integrated system to gain the insights needed for swift approvals and decision making.

“In order to update our outdated and over-extended IT infrastructure, we needed to move our financials to a centralized and secure environment,” said Kevin Braddy, IT director, Asahi Refining. “The Oracle ERP Cloud gives us real-time visibility into finance operations across the company and helps drive efficiencies across our financial processes. With this accurate financial information easily at hand, we are able to focus on growing our business.” 

Using the Oracle ERP Cloud and Oracle Cloud Platform, Asahi Refining was able to replace its legacy ERP environment with an integrated cloud-based financial system. Within three months, Asahi Refining was able to fully implement the solution and transition to Oracle Self-Service Procurement Cloud, Oracle Financials Cloud, and Oracle Purchasing Cloud.  The company now has a highly accurate, 360-degree view of its financial systems and operations. In addition, Asahi Refining was able to standardize reporting and reduce month-end reporting from a week to just three days, while increasing its efficiency in processing receivable transactions.

“We are happy to be working with Asahi Refining to help them transform their business with the Oracle Cloud,” said Amit Zavery, senior vice president, cloud platform and integration, Oracle.  “Moving from legacy systems to the cloud enabled Asahi Refining to modernize its technology systems, improving visibility into the business and ultimately accelerating growth and increasing efficiency.”

Asahi Refining used the Oracle Java Cloud and Oracle Database Cloud to seamlessly integrate its Oracle ERP Cloud applications with its legacy ERP system and third-party payroll applications, as well as to validate all data coming into the Oracle ERP Cloud from those legacy applications. Additionally, Asahi Refining has been able to lower its total cost-of-ownership by moving to the cloud, which the company can now leverage to realize additional business efficiencies in the future.

The Oracle Cloud runs in 19 data centers around the world and supports 70+ million users and more than 34 billion transactions each day. With the Oracle Cloud, Oracle delivers the industry’s broadest suite of enterprise-grade cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).

 
Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

