- Wednesday, August 31, 2016 08:00 AM (US Pacific Time)
- Wednesday, August 31, 2016 11:00 AM (US Eastern Time)
- Wednesday, August 31, 2016 05:00 PM (Central European Time)
- Wednesday, August 31, 2016 08:30 PM (India Standard Time)
This one-hour session is recommended for technical and functional users of WebCenter Content. The session focuses on new features of WebCenter Content 12c and Documents Cloud Service, and also includes a discussion of running WebCenter Content on the Compute Cloud Service.
- New features in the WebCenter Content 12c release
- New features in Documents Cloud Service
- Running WebCenter Content on the Compute Cloud Service
Duration: 1 hr
Current Schedule and Archived Downloads can be found in Note 740966.1
Topic: New Features for WebCenter Content 12c On Premise and Oracle Cloud Services
Event Number: 598 713 241
Event Passcode: 909090
Register for this Advisor Webcast: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=598713241&t=a
Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.
InterCall Audio Instructions
A list of Toll-Free Numbers can be found below.
- Participant US/Canada Dial-in #: 1-866-230-1938
- International Toll-Free Numbers
- Alternate International Dial-In #: +44 1452 562 665
- Conference ID: 53697714
If we log into our compute console we can see a list of instances that exist for an account. In our example we have four servers defined: one is Oracle Linux, one is CentOS 7, one is a database server, and the fourth is a WebLogic server. If we click on the Network tab we see the Security Rules, Security Lists, Security Applications, and Security IP Lists.
It is important to realize that Oracle takes a different approach when provisioning servers. The server is first provisioned with only SSH or RDP open, either via the default rule or via a security list that you create. In this example we see four different lists. The bitnami-moodle definition opens up ports 80 and 443 for a web server. The database definition prs12cHP opens up port 1521. The prsJava definitions open up administration ports as well as ports 80 and 443. The default security list only opens up port 22 for ssh connections.
If we look at the default security list, the default operation is to deny all inbound traffic from computers not in the security list and drop the packets with no reply. We could configure reject with reply, but this might invite a denial of service attack: someone constantly sending TCP/IP requests to our server just to overload the server and network with TCP ack packets. By default the configuration is to drop packets, and this typically happens at the border gateway rather than at our compute server. The outbound definition gives you the option of allowing packets, rejecting packets with an ack, or dropping packets with no ack. It is important to communicate to your users how you configure your server. If you configure outbound as deny with no reply, they might end up troubleshooting network connection issues when the packets are being dropped by design and it is not a router or connection issue.
Note that the concept of security list is a little misleading. For all of our instances we have an inbound policy of deny and an outbound policy of permit. Why not go with one security list and map all instances to this configuration? The key is in the security rules definition. We create a definition of a rule that maps security applications to a source and destination. By application we really mean a port number for an application. The source is where the packet is coming from and the destination is where the packet is going to. Since we have a permit all outbound traffic we only need to define the exceptions to the rule for inbound traffic. If, for example, we defined a deny inbound and deny outbound we would need to define the exception for both directions. If you look at the security rule definitions we are defining the source as the public-internet and the destination as each of our servers.
Security rules are essentially firewall rules. They permit traffic to and from your compute instances, and they can reference security lists as well as specific definitions between instances and external hosts. Yesterday we talked about turning off public ssh for a database server and only allowing ssh into the database server from our Java server. We would do this by turning off public-internet access over port 22 into the database server and allowing port 22 from our Java server to our database server. To access the database we would allow public access over port 22 into the Java server, require the user to log in to the command line there, then ssh across to the database server over port 22. With this we can hide our database instance from the public internet but still allow access to the console to manage it. We will also need to define an outbound rule that allows the database server to reach out and pull down patches, or else require staging patches on the Java server by turning off all outbound traffic and only allowing port 1521 to and from the Java server.
Note that we create a rule association by defining the security application and associating it with a source and destination. When we create a security rule we define whether it is enabled or disabled as well as the port or port ranges that we want to open. We can identify the source either with a security list or with specific IP lists. If we go with a Security IP List we can define a specific instance, a subnet (site), or the public internet. We can do the same for the destination. This effectively creates a virtual software defined network that maps packet routing to and from your instance.
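The three scopes a Security IP List can describe (a single instance, a site subnet, or the public internet) can be sketched with Python's ipaddress module. The list names and CIDR blocks below are illustrative assumptions, not actual Oracle Compute definitions:

```python
import ipaddress

# Hypothetical Security IP Lists: a single instance is a /32, a site is a
# subnet, and the public internet is 0.0.0.0/0 (matches everything).
IP_LISTS = {
    "db-instance": ["192.168.10.5/32"],
    "site": ["192.168.10.0/24"],
    "public-internet": ["0.0.0.0/0"],
}

def source_matches(ip_list_name, source_ip):
    """Return True if source_ip falls inside any block of the named list."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in ipaddress.ip_network(cidr)
               for cidr in IP_LISTS[ip_list_name])
```

For example, a packet from 192.168.10.200 matches the "site" list but not the "db-instance" list, which is exactly how a rule scoped to one instance stays narrower than a rule scoped to the subnet.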
If we look at the moodle server that we have running, for example, we have three security applications open. The first is ssh which allows us to connect to a shell and configure services. The second is http which maps to port 80 if we look at the Security Applications. The third is https which maps to port 443. These three ports are the only ports that are open and they are open to the public-internet as the source. We have a permit outbound rule so that the moodle server can pull in images from our storage servers, get updates with command line tools from other web servers, and download updates to the moodle server as needed from bitnami. We could just as easily have set the outbound policy to deny and only allow http, https, and ssh connections to this server inbound and outbound.
Note that this process and procedure is very similar to the way that Amazon AWS and Microsoft Azure define network rules. With AWS you go through the VPC Dashboard and define a VPN Connection. You create Security Groups that define the ports and access rights. For example the bch-shared-firewall-34877 group opens up ports 22, 80, and 443 to the public internet. The source of 0.0.0.0/0 is associated with the public internet. Note that we also have another rule that maps us to a specific management server. Once we define the inbound rules we can associate them with a VPN connection or gateway and define the inbound and outbound rules as we do on the Oracle Compute Cloud.
Azure does something similar and allows you to define ports or sets of ports when you create the instance. Note that TCP and UDP are the only protocols allowed. This implies that ICMP and other protocols are restricted in the Microsoft network. This typically is not a big deal but does have implications for how and what you can deploy in the Microsoft network. Amazon appears to allow ICMP in rule definitions, as does Oracle.
In summary, it appears that all three cloud vendors provide basic inbound and outbound rules. Microsoft limits the protocols to TCP and UDP and does not allow ICMP rules. This might or might not matter when selecting a cloud vendor. Once you have the rules defined you effectively have a secure system and flexibility to define subnets, netmasks, router tables, and layers of security with software defined networks. All three vendors appear to address this basic networking issue the same with one small difference with Azure. Now that we know how to configure networks it might be important to talk about speed, blocking, and throttling of networks. More tomorrow.
The JET application (download the release package with sample code - release_jet_mcs_v1.zip) renders a bar chart with data retrieved from an MCS endpoint returning information about employees:
Once data is retrieved, I can see invocation statistics logged and reported in the MCS dashboard. Calls are executed successfully:
To call the MCS service from Oracle JET, we need to pass two extra headers - Authorization and Oracle-Mobile-Backend-ID. Values for both headers can be obtained from the MCS dashboard. Go to Mobile Backend and navigate to the Settings section. You will find the required info under the HTTP Basic section:
To bypass the CORS issue, you can specify the Security_AllowOrigin property in MCS (you need to be an admin for this). Read more about it in the MCS Developer Guide - Environment Policies and Their Values. Download the property file to your local environment, change it, and upload it back:
For test purposes, I have specified Security_AllowOrigin=allow:
The Oracle JET application's JS module points to the REST URL handled by MCS:
The fetch operation is configured to pass the two headers specific to MCS - this allows the REST request to complete and a response to be returned:
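The JET application does this in JavaScript via fetch; the same request can be sketched in Python to make the two required headers concrete. The URL, backend ID, and credentials below are placeholders - the real values come from the MCS dashboard as described above:

```python
import base64
import urllib.request

# Placeholder values -- substitute the real ones from your MCS dashboard
# (Mobile Backend -> Settings -> HTTP Basic).
MCS_URL = "https://host.example.com/mobile/custom/Employees/employees"
BACKEND_ID = "your-mobile-backend-id"
AUTH = "Basic " + base64.b64encode(b"username:password").decode("ascii")

def build_mcs_request(url=MCS_URL):
    # Both headers are required: Authorization carries HTTP Basic auth,
    # and Oracle-Mobile-Backend-ID routes the call to the right backend.
    return urllib.request.Request(url, headers={
        "Authorization": AUTH,
        "Oracle-Mobile-Backend-ID": BACKEND_ID,
    })

req = build_mcs_request()
# urllib.request.urlopen(req) would execute the call against a live backend.
```

If either header is missing, MCS rejects the call, which is why the fetch configuration in the JET module must set both.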
The NetBeans Network Monitor tool displays the request headers (both MCS headers are included):
If the request is successful, you should see data returned in the response:
If we look at the layer communication (Figure 2.2 from VPN Illustrated) we see the different layers of the OSI model represented.
Today we are going to be talking about the Transport Layer (Layer 4 in the OSI model). An example of an application would be a web browser communicating with a web server. The web browser connects to the IP address of the web server and makes an http request for a file. The http request is an example of an application layer request. At the TCP layer we have to define the handshake mechanism to request and receive the file as well as the port used for the request. Ports are a new concept: we not only talk to the IP address of a server, we talk to a specific port on which the server has a listener ready and available for requests. In our web browser example we typically read clear text web pages on port 80 and secure web pages on port 443. A secure web page not only accepts a file download request but does it without anyone else on the network knowing what is being asked, because the communication between the web browser and web server is encrypted to prevent anyone from snooping the traffic being exchanged. This is needed if you want to transmit secure information like credit card numbers, social security numbers, or any other financial keys that assist in doing commerce across the internet.
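The division of labor described above can be sketched in Python: the socket supplies the transport-layer TCP connection to a port, and the bytes we write over it are the application-layer HTTP request. This is a minimal sketch, not production HTTP handling:

```python
import socket

def build_request(host, path="/"):
    # The application-layer payload: a minimal HTTP/1.1 GET request.
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n\r\n").format(path, host)

def fetch_status_line(host, port=80):
    # The transport layer: TCP handles the handshake, ordering, and
    # retransmission; we just write the request and read the reply.
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(build_request(host).encode("ascii"))
        # The first line of the response is the HTTP status line.
        return s.recv(1024).decode("ascii", "replace").splitlines()[0]
```

Calling fetch_status_line("example.com") would open a TCP connection to port 80 and return something like an HTTP status line; swapping in port 443 would additionally require wrapping the socket with TLS (the ssl module) before the same exchange happens encrypted.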
- ifconfig on Linux allows you to view and control the status of a network interface
- ipconfig on Windows is a similar tool
We just mentioned a new term here, a firewall. A firewall is a program that runs on a server and either allows traffic through or blocks traffic on a specific port. For example, if we want to allow anyone on our subnet access to our web page, we open up port 80 to the same subnet that we are on. If our corporate network consists of more than one subnet we might want to define an ip address range that we want to accept requests from. A firewall takes these connection requests at the TCP layer, opens up the TCP header, and inspects it, looking at the source and destination address as well as the port used for communications. If the port is open and allowing traffic from that subnet or ip address range, the request is passed to the web server software. If the port is open but the traffic is coming from outside the ip address range, the request is rejected with an error or the tcp/ip packet is silently dropped, based on our firewall rules. The same is true for all ports on compute instances on the internet. By default most cloud vendors open up port 22, the ssh port, which allows you to connect to a command line on a Linux or Unix server. Microsoft Azure typically opens up port 3389, the remote desktop connection port. This allows you to connect to a Windows desktop using the RDP application. It is typically a good idea to restrict which ip addresses can connect to your compute cloud server rather than allowing connections from any address.
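The decision a firewall makes per packet can be sketched as a toy filter in Python. The ports, the admin subnet, and the default-drop policy below are illustrative assumptions, not any vendor's actual rule format:

```python
import ipaddress

# Hypothetical rule set: web ports open to everyone, ssh restricted to an
# admin subnet. Anything unmatched falls through to the default policy.
RULES = [
    {"port": 80,  "source": "0.0.0.0/0",      "action": "ALLOW"},
    {"port": 443, "source": "0.0.0.0/0",      "action": "ALLOW"},
    {"port": 22,  "source": "203.0.113.0/24", "action": "ALLOW"},  # admin subnet only
]

def filter_packet(src_ip, dst_port, default="DROP"):
    """Inspect source address and destination port, return the action."""
    src = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if rule["port"] == dst_port and src in ipaddress.ip_network(rule["source"]):
            return rule["action"]
    # DROP discards the packet with no reply, the typical default; a
    # REJECT default would instead answer with an error.
    return default
```

With these rules a packet to port 80 from anywhere is allowed, while an ssh attempt from outside the admin subnet is silently dropped, which is exactly the symptom a user sees as a "connection timed out" rather than "connection refused".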
We could consider a router to be an implementation of a firewall. A router between your subnet and the corporate network would be a wide open firewall. It might not pass UDP packets and most likely does not pass multicast broadcasts. It will not typically pass the non-routable addresses that we talked about yesterday. If we have a 192.168.1.xxx address we typically don't route it outside of our local network, by definition, since these are local private addresses. A router can block specific addresses and ports by design and act like a firewall. For example, Oracle does not allow ftp access from inside the corporate network to outside servers. The ftp protocol transmits user names and passwords in the clear, which means that anyone using tools like tcpdump, ettercap, and ethereal (now Wireshark) can capture and display the passwords. There are more secure programs like sftp that perform the same function but encrypt not only the username and password but every data byte transmitted to and from the server.
Many routers, like the wifi routers that most people have in their homes, allow for network address translation (NAT) so that you are not presenting the 192.168.1.xxx address to the public internet but rather the address of the router/modem that connects you to the internet. Your desktop computer is at address 192.168.1.100, for example, but appears to the internet as the public address your provider assigned, say 203.0.113.10. When you connect to port 80 at the address that cnn.com resolves to, say 198.51.100.20, the TCP/IP header carries a source address of 203.0.113.10 and a destination address of 198.51.100.20. When your router gets a response back it knows that it needs to forward the response to 192.168.1.100 because it recorded the translation when you opened the connection. The router bridges this information back to you so that you don't need to consume a public ip address for each device that you connect from your home. The router/modem translates these requests using NAT, bridge links, or actual IP addresses if you configure your back end server to request a direct mapping.
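The bookkeeping a NAT router does can be sketched as a small translation table in Python. The public address and port numbers are illustrative placeholders (RFC 5737 documentation addresses), and real NAT implementations track far more state than this:

```python
# Minimal sketch of source NAT: the router rewrites the private source
# address to its own public address and remembers the mapping so that
# replies can be forwarded back to the right internal host.
PUBLIC_IP = "203.0.113.10"

class NatRouter:
    def __init__(self):
        # (public_ip, public_port) -> (private_ip, private_port)
        self.table = {}
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Translate an outgoing connection; return what the remote server sees."""
        public_port = self.next_port
        self.next_port += 1
        self.table[(PUBLIC_IP, public_port)] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def inbound(self, public_ip, public_port):
        """Look up which internal host a reply belongs to (None if unknown)."""
        return self.table.get((public_ip, public_port))
```

An outbound connection from 192.168.1.100 gets a fresh public port, and a reply arriving on that port is mapped back to the private address, which is how many home devices share one public IP.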
If we put all of this together along with the route command on Windows or Linux, we can define a default router that will take our IP packets and forward them along the right path. It might take a hop or two to get to our eventual destination, but we should be able to use something like Figure 2.9 from VPN Illustrated to represent our access to the Internet and use tools like traceroute to look at the hops and hop cost for us to get to the different cloud servers.
Note in this diagram that if we are on Host 4 we set our default router to be router 2. We then trust that router 2 will know how to get to router 1, and that router 1 will take us to cnn.com or whatever web site we are trying to connect to. All cloud vendors provide a default router configuration, a way of connecting to the internet, and a way of configuring firewall and subnet definitions. We might want to create a database server that does not have an internet connection, so we connect to our application server through ssh and then ssh into our database server through a private network. We might not have a public internet connection for our database but hide it in a subnet to keep it secure. In our routing map from VPN Illustrated we might put our database on host 4 and disable any connection to the internet. We might only allow traffic from the 200.10.4.xxx network to connect to the database. We might allow ssh, port 80, and port 443 connections to host 1 and allow only host 1 to connect via ssh to host 4. All cloud vendors allow you to do this and configure virtual networks, subnets, firewalls, and netmasks.
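The forwarding decision a host like Host 4 makes can be sketched in a few lines: deliver directly if the destination is on the local subnet, otherwise hand the packet to the default router. The subnet and router address below are illustrative, loosely following the 200.10.4.xxx network mentioned above:

```python
import ipaddress

# Illustrative local subnet and default router for a host on it.
LOCAL_NET = ipaddress.ip_network("200.10.4.0/24")
DEFAULT_ROUTER = "200.10.4.1"

def next_hop(dst_ip):
    """Return where the packet goes next: the destination itself if it is
    on the local subnet, otherwise the default router."""
    if ipaddress.ip_address(dst_ip) in LOCAL_NET:
        return dst_ip
    return DEFAULT_ROUTER
```

A packet to another 200.10.4.xxx host is delivered directly, while anything else, such as a public web site, is sent to the default router to be forwarded hop by hop, which is what traceroute lets you observe.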
We recommend that you get an IaaS account on AWS, Azure, and Oracle IaaS and play. See what works. See what you can configure from the command line. See what requires console configuration and what your options are when you provision a new operating system. See what you can automate with Orchestration scripts, python scripts, or chef/puppet configurations. Automation is the key to a successful deployment of a service. If something breaks it is important to be able to automate restarting, sizing up, and sizing down services, and this begins at the compute layer. It is also important to see if you can find a language or platform that allows you to change from one cloud vendor to another. Vendor lock-in at this level can cause you to stick with a vendor despite price increases. Going with something like bitnami allows you to select which vendor is cheapest, has the best network speeds and options, has the fastest chips and servers, and has the best SLAs and uptime history.
We didn't dive much into UDP. The key difference between TCP and UDP is the acknowledgement process when a packet is sent. TCP is a stateful transmission. When a browser makes a web request the client computer sends a TCP/IP packet. The web server responds with an acknowledgement packet confirming that it received the request. The web server then takes the file that was requested, typically something like index.html, and sends it back in another TCP/IP packet. The web browser responds that it received the file with an acknowledgement packet. This is done because at times the Internet gets busy, packets can collide, and a packet might never get delivered to the destination address. If this happens and the sender does not receive an acknowledgement, it resends the request. With a UDP packet the handshake does not happen. The sender sends out a packet and assumes that it was received. If there was a collision and the packet got dropped it is never retransmitted. Applications like Internet radio and Skype use this type of protocol because you don't need a retransmission of audio if the time to listen to it has passed. The packet is dropped and the audio is skipped and picked up at the next packet transmitted. Most cloud vendors support UDP routing and transmission. This is optional and typically a firewall configuration. It might or might not make sense for a database to send and receive using UDP, so it might not be an option when you get Platform as a Service. Most Infrastructure as a Service vendors provide configuration tools to allow or block UDP.
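UDP's fire-and-forget behavior can be illustrated with Python sockets on the loopback interface: sendto() transmits a datagram with no handshake and no acknowledgement. Delivery is only dependable here because loopback never drops packets; over a real network the datagram could simply vanish:

```python
import socket

def udp_roundtrip(message=b"ping"):
    """Send a UDP datagram to a local receiver and return what arrived."""
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))   # OS picks a free port
    recv.settimeout(2)
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # No connect(), no handshake, no ack: the datagram is just sent.
        send.sendto(message, recv.getsockname())
        data, _ = recv.recvfrom(1024)
        return data
    finally:
        send.close()
        recv.close()
```

Contrast this with the TCP case, where socket.create_connection() performs the three-way handshake before any data moves and the stack retransmits anything unacknowledged.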
In summary, we have covered basic addressing, routing, and firewalls, and touched briefly on the TCP and UDP headers. We don't really need to get into the depths of TCP and how packets are transmitted, how congestion is handled, and how collisions are compensated for. With a cloud vendor you typically need to ask if the network is oversubscribed or bandwidth limited. You also need to ask if you have configuration limitations and restrictions on what you can and cannot transmit. One of the risks of an unrestricted network is a noisy neighbor: congestion from another virtual machine provisioned alongside you. On the other hand, if your network is oversubscribed you will be bandwidth limited, and accessing your storage can limit your application speed. Our advice is to know your application, know if you are network limited, and know the security model and network configuration that you want ahead of time. Every cloud vendor differentiates their services, but few offer service level agreements on bandwidth and compute resources. Read the fine details and play with all options.