The JET application (download the release package with sample code - release_jet_mcs_v1.zip) renders a bar chart with data retrieved from an MCS endpoint that returns information about employees:
Once the data is retrieved, I can see invocation statistics logged and reported in the MCS dashboard. The calls are executed successfully:
To call an MCS service from Oracle JET, we need to pass two extra headers - Authorization and Oracle-Mobile-Backend-ID. The values for both headers can be obtained from the MCS dashboard. Go to Mobile Backend and navigate to the Settings section. You will find the required info under the HTTP Basic section:
To bypass the CORS issue, you can specify the Security_AllowOrigin property in MCS (you need to be an admin for this). Read more about it in the MCS Developer Guide - Environment Policies and Their Values. Download the property file to your local environment, change it, and upload it back:
For test purposes, I have specified Security_AllowOrigin=allow:
The Oracle JET application JS module points to the REST URL handled by MCS:
The fetch operation is configured to pass the two headers specific to MCS - this allows the REST request to complete and return a response:
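For illustration outside the JET code, the two-header requirement can be sketched in a few lines of Python; the URL and credential strings below are placeholders for the values shown on your own Mobile Backend Settings page:

```python
# Hypothetical values -- substitute the Authorization and
# Oracle-Mobile-Backend-ID values from your MCS dashboard.
MCS_URL = "https://example.mobileenv.oraclecloud.com/mobile/custom/EmployeeApi/employees"
BASIC_AUTH = "Basic bXl1c2VyOm15cGFzc3dvcmQ="
BACKEND_ID = "1234abcd-5678-90ef-1234-567890abcdef"

def mcs_headers(basic_auth, backend_id):
    """Build the two headers every REST call to MCS must carry."""
    return {
        "Authorization": basic_auth,
        "Oracle-Mobile-Backend-ID": backend_id,
    }

headers = mcs_headers(BASIC_AUTH, BACKEND_ID)
```

Any HTTP client (the JET fetch call, curl, or a server-side script) simply attaches this header map to the request; without both headers MCS rejects the call.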
The NetBeans Network Monitor tool displays the request headers (both MCS headers are included):
If the request is successful, you should see the data returned in the response:
If we look at the layer communication (Figure 2.2 from VPN Illustrated) we see the different layers of the OSI model represented.
Today we are going to be talking about the Transport Layer, or Layer 4 in the OSI model. An example of an application would be a web browser communicating with a web server. The web browser connects to the ip address of the web server and makes an http request for a file. The http request is an example of an application layer request. At the TCP layer we have to define the handshake mechanism to request and receive the file as well as the port used for the request. Ports are a new concept: we not only talk to the ip address of a server, we talk to it through a specific port for which the server has a listener ready and available for requests. In our web browser example we typically read clear text web pages on port 80 and secure web pages on port 443. The secure web page not only accepts a file download request but does it without anyone else on the network knowing what is being asked, because the communication between the web browser and web server is encrypted to prevent anyone from snooping the traffic being exchanged. This is needed if you want to transmit secure information like credit card numbers, social security numbers, or any other financial keys that assist in doing commerce across the internet.
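As a minimal sketch of what happens at the application layer, here are the clear-text bytes a browser would write to a TCP socket on port 80 (Python, with example.com as a placeholder host):

```python
def build_http_request(host, path="/"):
    """Build the clear-text HTTP/1.1 request a browser sends over TCP port 80."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path, host).encode("ascii")

# A real browser opens a TCP connection to the server's ip address on
# port 80 and writes exactly these bytes; anyone snooping the wire can
# read them, which is why port 443 wraps the same exchange in encryption.
request = build_http_request("example.com", "/index.html")
```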
- ifconfig on Linux allows you to view and configure the status of a network interface
- ipconfig on Windows is a similar tool
We just mentioned a new term here, a firewall. A firewall is a program that runs on a server and either allows traffic through or blocks traffic on a specific port. For example, if we want to allow anyone on our subnet access to our web page, we open up port 80 to the same subnet that we are on. If our corporate network consists of more than just one subnet we might want to define an ip address range that we want to accept requests from. A firewall takes these connection requests at the TCP layer and opens up the TCP header, inspecting the source and destination address as well as the port that is used for communications. If the port is open and allowing traffic from that subnet or ip address range, the request is passed to the web server software. If the port is open but the traffic is coming from outside of the ip address range, an error is returned or the tcp/ip packet is dropped, based on our firewall rules. The same is true for all ports that attach to compute engines on the internet. By default most cloud vendors open up port 22, which is the ssh port that allows you to connect to a command line on a Linux or Unix server. Microsoft Azure typically opens up port 3389, which is the remote desktop connection port. This allows you to connect to a Windows desktop using the RDP application on Windows desktops. It is typically a good idea to restrict connections to your cloud compute server to a specific ip address or range rather than accepting them from any address.
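The check a firewall performs on each TCP header can be sketched in a few lines of Python (the port-to-subnet rules here are hypothetical examples, not a real policy):

```python
import ipaddress

# Hypothetical rule table: destination port -> subnet allowed to connect.
ALLOWED_RULES = {
    80: ipaddress.ip_network("192.168.1.0/24"),  # web traffic from our subnet only
    22: ipaddress.ip_network("203.0.113.0/28"),  # ssh restricted to an admin range
}

def packet_allowed(src_ip, dst_port):
    """Inspect source address and destination port, as a firewall does."""
    allowed_net = ALLOWED_RULES.get(dst_port)
    if allowed_net is None:
        return False  # port closed: drop the packet
    return ipaddress.ip_address(src_ip) in allowed_net
```

A request from 192.168.1.55 to port 80 passes; the same request from 10.0.0.9 is dropped because the source falls outside the allowed range.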
We could consider a router to be an implementation of a firewall. A router between your subnet and the corporate network would be a wide open firewall. It might not pass UDP headers and most likely does not pass multicast broadcasts. It will not typically pass the non routable addresses that we talked about yesterday. If we have a 192.168.1.xxx address we typically don't route it outside of our local network, by definition, since these are local private addresses. A router can block specific addresses and ports by design and act like a firewall. For example, Oracle does not allow ftp access from inside of the corporate network to outside servers. The ftp protocol transmits user names and passwords in the clear, which means that anyone using tools like tcpdump, ettercap, and ethereal (now Wireshark) can capture and display the passwords. There are more secure programs like sftp that perform the same function but encrypt not only the username and password but every data byte transmitted to and from the server.
Many routers, like the wifi routers most people have in their homes, allow for network address translation (NAT) so that you are not presenting the 192.168.1.xxx address to the public internet but rather the address of the router/modem that connects you to the internet. Your desktop computer is at address 192.168.1.100, for example, but appears to the rest of the internet as the public address your internet provider assigned to your router, say 203.0.113.10. When you connect to port 80 at the address that http://cnn.com resolves to, say 198.51.100.50, the TCP/IP header leaves your router with a source address of 203.0.113.10 and a destination address of 198.51.100.50. When your router gets a response back it knows that it needs to forward the response to 192.168.1.100 because it recorded that translation when the connection was opened. The router bridges this information back to you so that you don't need to consume more ip addresses on the internet for each device that you connect with from your home. The router/modem translates these requests using NAT, bridged links, or actual IP addresses if you configure your back end server to request a direct mapping.
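A toy sketch of the translation table such a router maintains (Python, with documentation-range placeholder addresses; a real NAT implementation also tracks protocol state and mapping timeouts):

```python
# Placeholder public address assigned by the internet provider.
PUBLIC_IP = "203.0.113.10"

nat_table = {}  # public_port -> (private_ip, private_port)

def outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source address and record the mapping."""
    public_port = 40000 + len(nat_table)  # naive port allocation for the sketch
    nat_table[public_port] = (private_ip, private_port)
    return (PUBLIC_IP, public_port)

def inbound(public_port):
    """Look up which private host a response should be forwarded to."""
    return nat_table[public_port]
```

When 192.168.1.100 opens a connection, outbound() stamps the packet with the single public address; when the reply arrives, inbound() recovers the private host, so every device in the home shares one public ip.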
If we put all of this together along with the route command on Windows or Linux, we can define a default router that will take our IP packets and forward them along the right path. It might take a hop or two to get us to our eventual destination, but we should be able to use something like Figure 2.9 from VPN Illustrated to represent our access to the Internet and use a tool like traceroute to look at the hops and hop cost for us to get to the different cloud servers.
Note in this diagram that if we are on Host 4 we set our default router to be router 2. We then trust that router 2 will know how to get to router 1, and that router 1 will take us to cnn.com or whatever web site we are trying to connect to. All cloud vendors provide a default router configuration. All cloud vendors will give you a way of connecting to the internet. All cloud vendors will give you a way of configuring a firewall and subnet definitions. We might want to create a database server that does not have an internet connection, so we connect to our application server through ssh and then ssh into our database server through a private network. We might not have a public internet connection for our database but instead hide it in a subnet to keep it secure. In our routing map from VPN Illustrated we might put our database on host 4 and disable any connection to the internet. We might only allow traffic from the 200.10.4.xxx network to connect to the database. We might allow ssh, port 80, and port 443 connections to host 1 and allow only host 1 to connect via ssh to host 4. All cloud vendors allow you to do this and configure virtual networks, subnets, firewalls, and netmasks.
We recommend that you get an IaaS account on AWS, Azure, and Oracle IaaS and play. See what works. See what you can configure from the command line. See what requires console configuration and what your options are when you provision a new operating system. See what you can automate with orchestration scripts, python scripts, or chef/puppet configurations. Automation is the key to a successful deployment of a service. If something breaks it is important to be able to automate restarting, sizing up, and sizing down services, and this begins at the compute layer. It is also important to see if you can find a language or platform that allows you to change from one cloud vendor to another. Vendor lock-in at this level can cause you to stick with a vendor despite price increases. Going with something like Bitnami allows you to select which vendor is cheapest, has the best network speeds and options, has the fastest chips and servers, as well as the best SLAs and uptime history.
We didn't dive much into UDP. The key difference between TCP and UDP is the acknowledgement process when a packet is sent. TCP is a stateful transmission. When a web request is made by a browser, the client computer sends a TCP/IP packet. The web server responds with an acknowledgement packet confirming that it received the request. The web server then takes the file that was requested, typically something like index.html, and sends it back in another TCP/IP packet. The web browser responds that it received the file with an acknowledgement packet. This is done because at times the Internet gets busy, there is a chance of packet collisions, and a packet might never get delivered to the destination address. If this happens and the sender does not receive an acknowledgement, it resends the request. With a UDP packet the handshake does not happen. The sender sends out a packet and assumes that it was received. If there was a collision and the packet got dropped, it is never retransmitted. Applications like Internet radio and Skype use this type of protocol because you don't need a retransmission of audio signals if the time to listen to them has passed. The packet is dropped and the audio is skipped and picked up at the next packet transmitted. Most cloud vendors support UDP routing and transmission. This is optional and typically a firewall configuration. It might or might not make sense for a database to send and receive using UDP, so it might not be an option when you get the Platform as a Service. Most Infrastructure as a Service vendors provide configuration tools to allow or block UDP.
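The fire-and-forget behavior is easy to see with Python's socket module on the loopback interface; no handshake or acknowledgement takes place:

```python
import socket

# UDP sketch: the sender transmits one datagram and never waits for an
# acknowledgement.  A dropped datagram would simply be lost, which is
# acceptable for streaming audio but not for file transfer.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # let the OS pick a free port
receiver.settimeout(5.0)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"audio frame 42", ("127.0.0.1", port))  # no handshake, no ack

data, addr = receiver.recvfrom(1024)  # delivery happened to work this time
sender.close()
receiver.close()
```

On loopback the datagram always arrives; on a busy real network, a lost datagram would never be retransmitted and the receiver would simply pick up at the next one.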
In summary, we have covered basic addressing, routing, and firewalls, and touched briefly on the TCP and UDP headers. We don't really need to get into the depths of TCP and how packets are transmitted, how congestion is handled, and how collisions are compensated for. With a cloud vendor you typically need to ask if the network is oversubscribed or bandwidth limited. You also need to ask if you have configuration limitations and restrictions on what you can and cannot transmit. One of the risks of an unlimited network is a noisy neighbor: congestion from another virtual machine that is provisioned alongside you. On the other hand, if your network is oversubscribed you will be bandwidth limited, and accessing your storage can limit your application speed. Our advice is to know your application, know if you are network limited, and know the security model and network configuration that you want ahead of time. Every cloud vendor differentiates their services but few offer service level agreements on bandwidth and compute resources. Read the fine details and play with all the options.
A Guest Post by Jennifer Toomey, Sr. Principal Product Marketing Director, Oracle
Oracle OpenWorld San Francisco (September 18 - 22, 2016) offers more Oracle EPM content and expert experience than any other conference in the world. It features more than 60 EPM conference sessions, a dedicated EPM Showcase area for demos, three hands-on labs, plus presentations from multiple customers, partners, and Oracle staff.
Whether you already have, or are considering, an Oracle EPM solution, Oracle OpenWorld is the place to be.
Cloud: With multiple new cloud offerings released this year, EPM Cloud is in the spotlight with sessions featuring Oracle EPM Cloud customers, products, and strategy, as well as roadmap. Attendees will have the opportunity to get a closer look at these new cloud solutions. In addition, a number of customers will share their experiences and results from using Oracle EPM Cloud.
On Premises: There will also be sessions covering on-premises Oracle EPM products, including customers’ case studies, what’s new, and what’s coming.
EPM Showcase: This year we have created an area specifically for EPM that will be located on the second floor of Moscone West. Attendees can take advantage of this opportunity to meet with EPM experts and customers in a central location that integrates conference sponsors, exhibitors, and demos.
- Oracle EPM General Session with Deloitte: Executive Briefing on Oracle’s EPM Strategy and Roadmap [GEN6336]
- Customers Present: Oracle EPM and ERP Cloud Together [CON7514]
- Customers Present: EPM Cloud for Midsize Customers [CON7515]
- Application Integration: EPM, ERP, Cloud and On-premises - All the Options Explained [CON7497]
- Product Development Panel Q&A: Oracle Hyperion EPM Applications [CON7538]
Customers: Watch for these customers who are speaking: Barnes & Noble, Harvard Medical Faculty Physicians, Mattel, CIMA, JC Penney, Virginia Commonwealth University, Babcock & Wilcox, Lexington-Fayette Urban County, Meredith Corporation, SNC-Lavalin, and many more.
Have fun and learn at Oracle OpenWorld 2016. We look forward to seeing you in San Francisco!
Yes, Host Aggregate I/O Queue Depth is Important. But Why Overdo It When Using All-Flash Array Technology? Complexity is Sometimes a Choice.
I recently updated the EMC best practices guide for Oracle Database on XtremIO. One of the topics in that document is how many host LUNs (mapped to XtremIO storage array volumes) administrators should use for each ASM disk group. While performing the testing for the best practices guide it dawned on me that this topic is suitable for a blog post. I think too many DBAs are still using the ASM disk group methodology that made sense with mechanical storage. With All Flash Arrays–like XtremIO–administrators can rethink the complexities of the way they've always done it–as the adage goes.
Before reading the remainder of the post, please be aware that this is the first installment in a short series about host LUN count and ASM disk groups in all-flash environments. Future posts will explore additional reasons why simple ASM disk groups in all-flash environments make a lot of sense.

How Many Host LUNs are Needed With All Flash Array Technology?
We’ve all come to accept the fact that–in general–mechanical storage offers higher latency than solid state storage (e.g., All Flash Array). Higher latency storage requires more aggregate host I/O queue depth in order to sustain high IOPS. The longer I/O takes to complete the longer requests have to linger in a queue.
With mechanical storage it is not at all uncommon to construct an ASM disk group with over 100 (or hundreds of) ASM disks. That may not sound too complex to the lay person, but that’s only a single ASM disk group on a single host. The math gets troublesome quite quickly with multiple hosts attached to an array.
So why are DBAs creating ASM disk groups consisting of vast numbers of host LUNs after they adopt all-flash technology? Well, generally it's because that's how it has always been done in their environment. However, there is no technical reason to assemble complex, large disk-count ASM disk groups with storage like XtremIO. With All Flash Array technology, latencies are an order of magnitude (or more) shorter than with mechanical storage. Driving even large IOPS rates is possible with very few host LUNs in these environments because the latencies are low. To put it another way:
With All Flash Array technology host LUN count is strictly a product of how many IOPS your application demands
Lower I/O latency allows administrators to create ASM disk groups with very low numbers of ASM disks. Fewer ASM disks means fewer block devices. Fewer block devices means a simpler physical storage layout, and simpler is always better–especially in modern, complex IT environments.

Case Study
In order to illustrate the relationship between concurrent I/O and host I/O queue depth, I conducted a series of tests that I’ll share in the remainder of this blog post.
The testing consisted of varying the number of ASM disks in a disk group from 1 to 16 host LUNs mapped to XtremIO volumes. SLOB was executed with varying numbers of zero-think-time sessions from 80 to 480 and with slob.conf->UPDATE_PCT set to 0 and 20. The SLOB scale was 1TB and I used the SLOB Single-Schema Model. The array was a 4 X-Brick XtremIO array connected to a single 2s36c72t Xeon server running single-instance Oracle Database 12c and Linux 7. The default Oracle Database block size (8KB) was used.
Please note: Read Latencies in the graphics below are db file sequential read wait event averages taken from AWR reports and therefore reflect host I/O queueing time. The array-level service times are not visible in these graphics. However, one can intuit such values by observing the db file sequential read latency improvements when host I/O queue depth increases. That is, when host queueing is minimized the true service times of the array are more evident.

Test Configuration HBA Information
The host was configured with 8 Emulex LightPulse 8GFC HBA ports. HBA queue depth was configured in accordance with the XtremIO Storage Array Host Configuration Guide, thus lpfc_lun_queue_depth=30 and lpfc_hba_queue_depth=8192.

Test Configuration LUN Sizes
All ASM disks in the testing were 1TB. This means that the 1-LUN test had 1TB of total capacity for the datafiles and redo logs. Conversely, the 16-LUN test had 16TB capacity. Since the SLOB scale was 1TB, readers might ponder how 1TB of SLOB data and redo logs can fit in 1TB. XtremIO is a storage array that has always-on, inline data reduction services including compression and deduplication. Oracle data blocks cannot be deduplicated. In the testing it was the XtremIO array-level compression that allowed 1TB-scale SLOB to be tested in a single 1TB LUN mapped to a 1TB XtremIO volume.

Read-Only Baseline
Figure 1 shows the results of the read-only workload (slob.conf->UPDATE_PCT=0). As the chart shows, Oracle database is able to perform 174,490 read IOPS (8KB) with average service times of 434 microseconds with only a single ASM disk (host LUN) in the ASM disk group. This I/O rate was achieved with 160 concurrent Oracle sessions. However, when the session count increased from 160 to 320, the single LUN results show evidence of deep queueing. Although the XtremIO array service times remained constant (detail that cannot be seen in the chart), the limited aggregate I/O queue depth caused the db file sequential read waits at 320, 400 and 480 sessions to increase to 1882us, 2344us and 2767us respectively. Since queueing causes the total I/O wait time to increase, adding sessions does not increase IOPS.
As seen in the 2 LUN group (Figure 1), adding an XtremIO volume (host LUN) to the ASM disk group had the effect of nearly doubling read IOPS in the 160 session test but, once again, deep queueing started to occur in the 320 session case and thus db file sequential read waits approached 1 millisecond—albeit at over 300,000 IOPS. Beyond that point the 2 LUN case showed increasing latency and thus no improvement in read IOPS.
Figure 1 also shows that from 4 LUNs through 16 LUNs latencies remained below 1 millisecond even as read IOPS approached the 520,000 level. With the information in Figure 1, administrators can see that host LUN count in an XtremIO environment is actually determined by how many IOPS your application demands. With mechanical storage administrators were forced to assemble large numbers of host LUNs for ASM disks to accommodate high storage service times. This is not the case with XtremIO.

Read / Write Test Results
Figure 2 shows measured IOPS and service times based on the slob.conf->UPDATE_PCT=20 testing. The IOPS values shown in Figure 2 are the combined foreground and background process read and write IOPS. The I/O ratio was very close to 80:20 (read:write) at the physical I/O level. As was the case in the 100% SELECT workload testing, the 20% UPDATE testing was also conducted with varying Oracle Database session counts and host LUN counts. Each host LUN mapped to an XtremIO volume.
Even with moderate SQL UPDATE workloads, the top Oracle wait event will generally be db file sequential read when the active data set is vastly larger than the SGA block buffer pool—as was the case in this testing. As such, the key performance indicator shown in the chart is db file sequential read.
As was the case in the read-only testing, this series of tests also shows that significant amounts of database physical I/O can be serviced with low latency even when a single host LUN is mapped to a single XtremIO volume. Consider, for example, the 160 session count test with a single LUN where 130,489 IOPS were serviced with db file sequential read wait events serviced in 754 microseconds on average. The positive effect of doubling host aggregate I/O queue depth can be seen in Figure 2 in the 2 LUN portion of the graphic. With only 2 host LUNs the same 160 Oracle Database sessions were able to process 202,931 mixed IOPS with service times of 542 microseconds. The service time decrease from 754 to 542 microseconds demonstrates how removing host queueing allows the database to enjoy the true service times of the array—even when IOPS nearly doubled.
With the data provided in Figures 1 and 2, administrators can see that it is safe to configure ASM disk groups with very few host LUNs mapped to XtremIO storage array volumes, making for a simpler deployment. Only those databases demanding significant IOPS need to be created in ASM disk groups with large numbers of host LUNs.
Figure 3 shows a table summarizing the test results. I invite readers to look across their entire IT environment and find the ASM disk groups that sustain enough IOPS to require more than a single host LUN in an XtremIO environment. Doing so will help readers see how much simpler their environment could be with an all-flash array.

Summary
Everything we know in IT has a shelf-life. Sometimes the way we’ve always done things is no longer the best approach. In the case of deriving ASM disk groups from vast numbers of host LUNs, I’d say All-Flash Array technology like XtremIO should have us rethinking why we retain old, complex ways of doing things.
This post is the first installment in a short series on ASM disk groups in all-flash environments. The next installment will show readers why low host LUN counts can even make adding space to an ASM disk group much, much simpler.
Filed under: oracle
With the rise of the digital world, web, mobile, social and cloud technologies have changed people's expectations of how they engage with each other and how work gets done. For most organizations, it's not a matter of "if" they will migrate to the cloud, it's "when".
Join CMSWire, with Craig Wentworth, Principal Analyst at MWD Advisors, and David Le Strat, Senior Director of Product Management at Oracle, for a one-hour webinar on how you can leverage your current IT investments as you modernize your applications infrastructure, embrace new digital imperatives, and deliver the experiences customers, partners and employees expect.
Wed, Aug 24 at 10am PT/ 1pm ET/ 7pm CET
This webinar will cover:
- How to overcome common challenges and hurdles of cloud adoption
- Key trends in embracing cloud, content and experience management solutions
- How to leverage your existing investments while still reaping the benefits of the cloud
Bonus: Webinar attendees have a chance to win a free pass to CMSWire's DX Summit 2016, November 14 - 16, in Chicago (a value of $1295). The winner will be announced at the end of the live Q&A.