Alternate sources of information about these layers can be found at
- How Stuff Works - a great podcast in my opinion
- ietf.org Tutorial on Layers 2 and 3
Layer 3 is the communication protocol layer used to create and define packets. Apple, for example, defined a protocol called AppleTalk so that Apple computers and devices could talk to each other; it never really took off. Digital Equipment Corporation did something similar with DECnet on VAX/VMS, which let its computers talk to each other very efficiently and consume a network without regard for other computers on it. Over the years the IP protocol has come to dominate. The protocol is currently in transition from IPv4 to IPv6 because the number of devices attached to the internet has exceeded the number of addresses available in the protocol. An IPv4 address is written in dotted-quad (dotted-decimal) notation: four fields that together denote the network and the host. For example, 220.127.116.11 is a valid IP address. Each of the four fields can range from 0 to 255, with some values reserved. For example, 0.0.0.0 is not considered a valid host address and neither is 255.255.255.255, because they are reserved for special functions. IPv6 uses a similar notation, but addresses are written as eight blocks of 16-bit values. An example of this would be 5f05:2000:80ad:5800:58:800:2023:1d71. Note that this gives us 128 bits rather than 32 bits to represent an address: IPv4 has 4,294,967,296 possible addresses in its address space, while IPv6 has 340,282,366,920,938,463,463,374,607,431,768,211,456.
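As a minimal sketch of the two formats, Python's standard-library ipaddress module parses both notations; the addresses below are the illustrative ones from the text, not addresses you should expect to route anywhere.

```python
# Sketch: IPv4 vs. IPv6 address formats using Python's standard library.
import ipaddress

v4 = ipaddress.ip_address("220.127.116.11")
v6 = ipaddress.ip_address("5f05:2000:80ad:5800:58:800:2023:1d71")

print(v4.version, int(v4))      # the dotted quad is just a 32-bit integer
print(v6.version, v6.exploded)  # eight 16-bit blocks, 128 bits total

# Address-space sizes quoted above: 2**32 for IPv4, 2**128 for IPv6.
print(2**32)    # 4294967296
print(2**128)   # 340282366920938463463374607431768211456
```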
With IPv4 addressing there is something called classes of networks. A class A address starts with a leading zero followed by seven bits to define a network and 24 bits to define a specific host. This is typically not what is used when talking about cloud services. A class B address starts with the bits 10 followed by 14 bits to define a network and 16 bits to define a host. Data centers typically use something like this because they can have thousands of servers in a data center. A class C address starts with the bits 110 followed by 21 bits to define the network and 8 bits to define a host. This allows up to 256 addresses (254 usable hosts) on one network, which could be a department or an office building. A class D address starts with 1110 and denotes multicast. Packets sent to such an address are delivered to all hosts in the multicast group; hosts may, but are not required to, pick up the packet and look at its data. A class E address starts with 1111 and is reserved, not to be used. The image from Chapter 2 of TCP/IP Illustrated Volume I shows the above visually.
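The leading-bit patterns above are easy to check in code. Here is a small sketch that classifies an address by its first octet; the class scheme is obsolete post-CIDR, but the bit patterns still explain the reserved ranges discussed later.

```python
# Sketch: classify an IPv4 address by its leading bits, per the
# historical class scheme described above.
import ipaddress

def ipv4_class(addr: str) -> str:
    top = int(ipaddress.ip_address(addr)) >> 24  # first octet
    if top >> 7 == 0b0:
        return "A"   # 0xxxxxxx -> 7 network bits, 24 host bits
    if top >> 6 == 0b10:
        return "B"   # 10xxxxxx -> 14 network bits, 16 host bits
    if top >> 5 == 0b110:
        return "C"   # 110xxxxx -> 21 network bits, 8 host bits
    if top >> 4 == 0b1110:
        return "D"   # 1110xxxx -> multicast
    return "E"       # 1111xxxx -> reserved

print(ipv4_class("10.0.0.1"))     # A
print(ipv4_class("172.16.0.1"))   # B
print(ipv4_class("192.168.0.1"))  # C
print(ipv4_class("224.0.0.1"))    # D
```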
This comes into play when someone talks about netmasks. A /16 means that the leading 16 bits identify the network and the remaining 16 bits identify hosts within it. You might also see /24, which means the first 24 bits identify the network and the last 8 bits identify the host. If you set your netmask to 255.255.255.0 on a class B network, the first 16 bits define the corporate network, the next 8 bits define the subnet in the company, and the last 8 bits define the specific host. This means that you can have 256 subnets in the company and 254 usable hosts on each subnet. A netmask of 255.255.255.0 says that a destination is on the local subnet when the first three octets match, so traffic to it is not routed outside the subnet. What this means is that a router either passes the packets through or does not pass them through based on the netmask and the IP address of the destination.
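The local-versus-routed decision can be sketched with the same standard-library module; the network below is an arbitrary example from the private 192.168.0.0/16 range.

```python
# Sketch: how a /24 netmask (255.255.255.0) decides whether a destination
# is on the local subnet (deliver directly) or must go through a router.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")  # mask 255.255.255.0

print(ipaddress.ip_address("192.168.10.42") in net)  # True  -> local delivery
print(ipaddress.ip_address("192.168.11.42") in net)  # False -> send to router

# The same mask written both ways:
print(net.netmask)    # 255.255.255.0
print(net.prefixlen)  # 24
```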
You might hear the term CIDR (classless inter-domain routing). CIDR replaces the rigid class A/B/C boundaries with network prefixes of arbitrary length (the /8, /16, /24 notation above), which lets addresses be allocated and routes be aggregated more flexibly. We will not get into this in depth, but netmasks are good ways of limiting routing tables and spanning trees across networks. This is typically a phrase that you need to know about if you are looking at limiting communication and the flow of addresses across a data center.
Earlier we talked about reserved networks and subnets. Some of the network definitions for IPv4 are defined as private and non-routable networks. A list of these addresses include
- 0.0.0.0/8 Hosts on the local network. May be used only as a source IP address.
- 10.0.0.0/8 Address for private networks (intranets). Such addresses never appear on the public Internet.
- 127.0.0.0/8 Internet host loopback addresses (same computer). Typically only 127.0.0.1 is used.
- 169.254.0.0/16 “Link-local” addresses—used only on a single link and generally assigned automatically.
- 172.16.0.0/12 Address for private networks (intranets). Such addresses never appear on the public Internet.
- 192.168.0.0/16 Address for private networks (intranets). Such addresses never appear on the public Internet.
- 224.0.0.0/4 IPv4 multicast addresses (formerly class D); used only as destination addresses.
- 240.0.0.0/4 Reserved space (formerly class E), except 255.255.255.255.
- 255.255.255.255/32 Local network (limited) broadcast address.
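Most of the reserved ranges listed above can be checked programmatically; as a minimal sketch, Python's ipaddress module already classifies them.

```python
# Sketch: the standard library knows most of the reserved IPv4 ranges
# listed above (private, loopback, link-local, multicast).
import ipaddress

for a in ["10.1.2.3", "127.0.0.1", "169.254.1.1", "172.16.0.5",
          "192.168.1.1", "224.0.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(a)
    print(a, ip.is_private, ip.is_loopback, ip.is_link_local, ip.is_multicast)
```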
Multicast addressing is supported by both IPv4 and IPv6. An IP multicast address (also called a group or group address) identifies a group of host interfaces rather than a single one. Most cloud vendors don't allow multicast and restrict communication to unicast from one server to another.
Some additional terms that come up around networking discussions are network address translation (NAT), the Border Gateway Protocol (BGP), and firewalls. We will defer these conversations to higher-layer protocols because they involve more than just the IP address. At its simplest, a BGP policy can drop routes so that certain IP addresses are never passed outside the corporate network, independent of the netmask that the source host uses. If, for example, we want to stop someone from connecting to an IP address outside of our network and force the traffic to go through a firewall or packet-filter device, BGP can redirect all traffic through these devices or drop the packets.
In summary, we skimmed over routing, which is a complex subject. We mainly talked about layers 2 and 3 to introduce the terms MAC address, IP address, IPv4, and IPv6. We touched on CIDR and routing tables as well as reserved addresses, BGP, and NAT. This is not a complete discussion of these subjects but an introduction of terms. Most cloud vendors do not support multicast or anycast broadcasts inside or outside of their cloud services. Most cloud vendors support IPv4 and IPv6 as well as subnet masking and multiple networks for servers and services. It is important to understand what a router is, how to configure a routing table, and the dangers of creating routing loops. We did not touch on hop count and hop cost because for most cloud implementations the topology is simple and servers inside a cloud implementation are rarely more than a hop or two away unless you are trying to create a highly available service in another data center, zone, or region. Up next, the data layer and the IP datagram.
Content and feature rich engagement sites can help drive effective interactions with various groups such as customers, partners, and employees, leading to higher satisfaction and loyalty. With Oracle’s collaborative marketing asset development solution, business users with absolutely no website experience can rapidly assemble rich, interactive engagement microsites for marketing and communities. Microsites can be built on the fly with new content and also incorporate existing enterprise content, processes, and social applications all within a single integrated user interface.
Redwood Shores, Calif.—Aug 8, 2016
Asahi Refining, the world’s leading provider of precious metal assaying, refining, and bullion products, selected Oracle Cloud Applications and Oracle Cloud Platform to streamline its procurement and financial processes to get a more comprehensive and accurate picture of its financials to provide better visibility into the business. By moving to the cloud, Asahi Refining has been able to shift its full attention to its core business of refining gold and silver and accelerate business growth.
The ongoing digitization of the refining industry means that organizations need an integrated financial platform to leverage data insights that can help evolve their business models and retain their competitive advantage. To address this market shift, Asahi Refining needed to overhaul its legacy enterprise resource planning (ERP) system, which was difficult to maintain, had limited reporting capabilities and contained fragmented data spread across various silos. The company needed a modern, integrated system to gain the insights needed for swift approvals and decision making.
“In order to update our outdated and over-extended IT infrastructure, we needed to move our financials to a centralized and secure environment,” said Kevin Braddy, IT director, Asahi Refining. “The Oracle ERP Cloud gives us real-time visibility into finance operations across the company and helps drive efficiencies across our financial processes. With this accurate financial information easily at hand, we are able to focus on growing our business.”
Using the Oracle ERP Cloud and Oracle Cloud Platform, Asahi Refining was able to replace its legacy ERP environment with an integrated cloud-based financial system. Within three months, Asahi Refining was able to fully implement the solution and transition to Oracle Self-Service Procurement Cloud, Oracle Financials Cloud, and Oracle Purchasing Cloud. The company now has a highly accurate, 360-degree view of its financial systems and operations. In addition, Asahi Refining was able to standardize reporting and reduce month-end reporting from a week to just three days, while increasing its efficiency in processing receivable transactions.
“We are happy to be working with Asahi Refining to help them transform their business with the Oracle Cloud,” said Amit Zavery, senior vice president, cloud platform and integration, Oracle. “Moving from legacy systems to the cloud enabled Asahi Refining to modernize its technology systems, improving visibility into the business and ultimately accelerating growth and increasing efficiency.”
Asahi Refining used the Oracle Java Cloud and Oracle Database Cloud to seamlessly integrate its Oracle ERP Cloud applications with its legacy ERP system and third-party payroll applications, as well as to validate all data coming into the Oracle ERP Cloud from those legacy applications. Additionally, Asahi Refining has been able to lower its total cost-of-ownership by moving to the cloud, which the company can now leverage to realize additional business efficiencies in the future.
The Oracle Cloud runs in 19 data centers around the world and supports 70+ million users and more than 34 billion transactions each day. With the Oracle Cloud, Oracle delivers the industry’s broadest suite of enterprise-grade cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).
Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Safe Harbor
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.
In this blog, I will explain how to export RMAN backups to a shared disk that belongs to a domain.
Ensuring the safety of the data is one of the administrator's main tasks:
- Putting protection in place for the database's sensitive files:
- Control file
- Redo log files
- Putting a backup/recovery strategy in place that is:
- Adapted to the company's constraints
- Tested and documented.
To learn more about the different backup and restore techniques, I suggest taking a look at our Workshop Oracle Backup Recovery page.
Many of you certainly use Windows servers to administer Oracle databases; however, administering them in a Windows environment is not always as straightforward as on Linux.
That is why I am proposing a backup solution that exports your backups to a shared disk or a storage server, from which the backups are copied daily to disk or tape.
Here are the steps to follow:
- Check the (read/write) permissions on the shared disk
- Configure the Oracle service and the listener in the "services.msc" tool to run as the service user
- Make sure the service account's password never expires and that the account is never locked or deleted.
- Restart the services (Oracle and listener)
- Test the backups with RMAN
Go to "services.msc" and change the "OracleService_[instance_name]" service as well as the "Listener" service to use the service user that runs your databases.
Right-click, choose "Properties", go to the "Log On" tab, then select "This account".
Then click "Browse", type the name of the service user, and finally click "Check Names" to find the user in Active Directory.
Of course, it is better to script the backups via the Task Scheduler so that they run automatically. I will cover that next step in a second blog post.
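As a hedged sketch of what such a scheduled script could look like, the Python snippet below builds an RMAN command that writes backup pieces to a UNC path and hands it to the rman CLI. The share name, file paths, and function names are hypothetical placeholders, not part of the original post; adapt them to your environment.

```python
# Hypothetical sketch of a scheduled RMAN backup to a domain share.
import os
import subprocess
import tempfile

def rman_backup_command(share: str) -> str:
    # RMAN script text: FORMAT points the backup pieces at the UNC path
    # of the shared disk. %d = database name, %T = date, %U = unique name.
    return (
        "BACKUP DATABASE PLUS ARCHIVELOG "
        f"FORMAT '{share}\\%d_%T_%U.bkp';"
    )

def run_backup(share: str) -> None:
    # Write the RMAN script to a temp file and hand it to the rman CLI.
    # The Oracle service (and this scheduled task) must run as the domain
    # service user that has read/write rights on the share (step 1 above).
    script = rman_backup_command(share)
    with tempfile.NamedTemporaryFile("w", suffix=".rman", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        subprocess.run(["rman", "target", "/", f"cmdfile={path}"], check=True)
    finally:
        os.remove(path)

# Usage (hypothetical share name):
#   run_backup(r"\\fileserver\oracle_backups")
```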
Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, September 7, 2016.
Janet Randolph, vice president, Human Resources, and Rosetta Jasperson, manager HR Operations and Systems, will discuss how Oracle HCM Cloud helped Telogis consolidate its data, integrate its payroll system globally, and improve reporting and data reliability. Randolph and Jasperson will also share the importance of an integrated operation: being able to implement inventory and finance in the cloud along with HCM was a key decision point for the company.
Register now to attend the live Forum on Wednesday, September 7, 2016, at 9:00 a.m. PT and learn more about Telogis’ experience with Oracle HCM Cloud.
Kamil Stawiarski, who runs Database Whisperers sp. z o. o. sp. k., an Oracle specialist consulting company in Poland that is also a reseller for our Oracle database security scanner PFCLScan, has invited me to speak....
Posted by Pete On 08/08/16 At 12:48 PM
Protection of sensitive data while at-rest, in-motion or in-use all need to be addressed as part of a holistic security strategy. This includes both Personally Identifiable Information (PII) as well as sensitive PeopleSoft system configurations.
When performing a PeopleSoft security audit, Integrigy reviews the use and implementation of encryption within all components of the PeopleSoft technology stack. This includes the following, all of which are critical. Review yours today and contact Integrigy with any questions.
- Implementation of Oracle Advanced Security Option (ASO) for Transparent Data Encryption (TDE), Oracle Wallets and encryption key management for database encryption
- Configuration of SQL-NET encryption between database server, application and web servers
- PeopleSoft Pluggable Encryption Technology (PET)
- PeopleSoft client and web services connections. Specifically, we look to ensure that both internal and external network traffic is encrypted using TLS not SSL to encrypt network traffic. TLS is the successor to SSL and is considered more secure.
- Encryption of Tuxedo configurations using the PSADMIN utility
- Encryption of PeopleSoft web server configurations by generating or implementing a new PSCipher key to encrypt values in the web server configuration files.
- Encryption of the Template file. The Template file is used to share configurations among multiple environments (Test, Dev, Prod, etc.); passwords stored in the file MUST be encrypted and must not be stored in clear text.
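On the "TLS not SSL" point in the checklist above, here is a minimal, hedged sketch (in Python, not PeopleSoft configuration) of what enforcing a TLS floor looks like on the client side: a context that refuses anything older than TLS 1.2.

```python
# Sketch: enforce "TLS, not SSL" on a client connection by setting a
# minimum protocol version. create_default_context() already disables
# SSLv2/SSLv3; this additionally refuses TLS 1.0 and 1.1.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```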
If you have questions, please contact us at firstname.lastname@example.org
Michael A. Miller, CISSP-ISSMP, CCSP
TCP/IP Illustrated starts out by talking about the history of computer connectivity and the evolution of the seven-layer OSI stack. The seven layers consist of physical (1), link (2), network (3), transport (4), session (5), presentation (6), and application (7). Each of these layers has different protocols, methodologies, and incantations that make it unique and worthy of selection for different problems.
The physical layer is the actual connection between two computers. This might be a copper cable, fiber optic cable, or wireless network. The physical connection media is the definition for this layer. Most of us are familiar with a cable that comes out of the wall, switch, or router and plugs into our server or wifi hub. We are also familiar with a wifi or bluetooth connection that allows us to connect without a physical wire connecting us to other computers. We are not going to focus on this layer but assume that we are wirelessly or ethernet connected to the internet and the cloud servers that we are connecting to are wired to an internet connection. We then use the nebulous internet to route our requests to access our cloud server and responses back to us. This will require higher layers of the stack to make this happen but the default is that we are connected to a network in some manner as well as the server that we want to connect to.
The link or data link layer (layer 2) includes protocols for connecting to a link and exchanging data. Links can be multi-access, with more than just two computers talking to each other; WiFi and Ethernet networks are examples. We can have more than two computers on these networks and all of them can operate on the network at the same time. Not all of the computers can talk at once, but they can time-slice the network and share the common physical layer.
The network or internetwork layer (layer 3) is the protocol layer where we frame packets of information and define communication protocols. The IP (Internet Protocol) part of TCP/IP is defined at this layer. We can put a data analyzer on the physical cable and look at the bits streaming by on the wire (or WiFi) and decode these packets into data and control blocks. Other protocols for creating data packets are defined here as well.
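To make "decoding packets into data and control blocks" concrete, here is a minimal sketch that unpacks the fixed 20-byte IPv4 header (per RFC 791) from raw bytes; the sample header is hand-crafted for illustration, with the checksum left as zero.

```python
# Sketch: decode the fixed 20-byte IPv4 header from raw bytes.
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # ! = network byte order; B = version/IHL, B = TOS, H = total length,
    # H = id, H = flags/fragment, B = TTL, B = protocol, H = checksum,
    # 4s/4s = source/destination address.
    ver_ihl, tos, total_len, ident, frag, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,  # IHL counts 32-bit words
        "ttl": ttl,
        "protocol": proto,                   # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-crafted example: version 4, IHL 5, TTL 64, protocol TCP,
# 10.0.0.1 -> 10.0.0.2, checksum zeroed for illustration.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
print(parse_ipv4_header(hdr))
```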
The transport layer (layer 4) is the layer where we describe how data is exchanged and deal with lost packets, addresses, and different types of services. TCP, for example, exists at this layer. If packets are corrupted or dropped in transit, listeners cannot properly read them; TCP defines how to request retransmission of the data as well as how to back off in the short term to avoid further loss. Other protocols like UDP and multicast are defined at this layer, which allow us to do things like broadcast messages to all hosts on a network without waiting for a response or acknowledgement. We might want to do this for a video broadcast from a single source, where we know that we have one transmitter and multiple receivers on a network.
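The fire-and-forget nature of UDP can be shown in a few lines; this sketch sends one datagram over the loopback interface with no handshake and no acknowledgement, in contrast to TCP's retransmission machinery.

```python
# Sketch: UDP at layer 4 is "fire and forget" -- no handshake, no ACK,
# no retransmission. A minimal loopback send/receive:
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))      # let the OS pick a free port
recv.settimeout(5)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))   # sent once, never acknowledged

data, addr = recv.recvfrom(1024)
print(data)                      # b'hello'
send.close()
recv.close()
```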
The session layer (layer 5) provides handshaking mechanisms to maintain state between data packets. An example of this would be a cookie in a web browser that maintains a relationship between a client and a web server. Server affinity and route preferences are also defined at this layer: if we have a pool of web servers and want to send a client back to the web server that it went to previously, this layer helps create that affinity.
The presentation layer (layer 6) is responsible for format conversions and is typically not manipulated or used for internet protocols or communications.
The application layer (layer 7) is where most of the work is done. A web server, for example, uses http as the communication protocol and defines how screens are painted inside a browser and what files are retrieved from a web server. There are hundreds of protocols defined at this layer and we will go into a few examples in future blogs.
If we take an overview of TCP/IP Illustrated Volume I we see that chapter 1 covers the OSI stack and introduces networking, the history of networking, and layer 1 options. Chapter 2 covers layer 3 addressing and touches on the differences between IPv4 and IPv6. Chapter 3 covers the link layer, or layer 2, focusing on Ethernet, bridges, switches, wireless networks, point-to-point protocols, and tunneling options. Chapter 4 dives into the ARP protocol, which sits between layers 2 and 3 and maps IP addresses to hardware addresses on a network. Chapter 5 covers the IP definition and discusses packet headers and formats. Chapter 6 goes further into addressing and talks about the dynamic host configuration protocol (DHCP) for assigning addresses dynamically. Chapter 7 discusses firewalls and routers as well as network address translation (NAT) concepts. This is the area that typically gets confusing for cloud vendors and leads to different configurations and options when it comes to protecting servers in the cloud. Chapters 8 and 9 deal with the internet control message protocol (ICMP), broadcasting, and multicasting. Most cloud vendors don't deal with this functionality and simply prohibit its use. Chapter 10 focuses on UDP and IP fragmentation. Chapter 11 centers on the Domain Name System (DNS). Each cloud vendor addresses this differently with local and global naming services; we will look at the major cloud vendors and see how they address local naming and name resolution. Chapters 12 through 17 deal with the TCP structure, management, and operation. The Stanford class spent most of the semester on this and on ways of handling errors and issues. Most cloud vendors do this for you and don't really let you manipulate or modify anything presented in these chapters. The book finishes with Chapter 18, talking about security in all of its flavors and incantations. We will spend a bit of time talking about this since it is of major concern for most users.
In review, we are going to go back and look at networking terms, concepts, and buzzwords so that when someone asks whether a given cloud service provides xyz, you have a strong context for what they are asking. We are not trying to make everyone a networking expert, just trying to level-set the language so that we can compare and contrast services between different cloud vendors.
- Partner Webcast - Oracle REST Data Services Communication for Cloud and Mobility (Oracle Partner Hub: ISV Migration Center Team)
via Oracle Partner Hub: ISV Migration Center Team http://ift.tt/1AAiVSD
I visited DataStax on my recent trip. That was a tipping point leading to my recent discussions of NoSQL DBAs and misplaced fear of vendor lock-in. But of course I also learned some things about DataStax and Cassandra themselves.
On the customer side:
- DataStax customers still overwhelmingly use Cassandra for internet back-ends — web, mobile or otherwise as the case might be.
- This includes — and “includes” might be understating the point — traditional enterprises worried about competition from internet-only ventures.
- Customers in large numbers want cloud capabilities, as a potential future if not a current need.
One customer example was a large retailer, who in the past was awful at providing accurate inventory information online, but now uses Cassandra for that. DataStax brags that its queries come back in 20 milliseconds, but that strikes me as a bit beside the point; what really matters is that data accuracy has gone from “batch” to some version of real-time. Also, Microsoft is a DataStax customer, using Cassandra (and Spark) for the Office 365 backend, or at least for the associated analytics.
Per Patrick McFadin, the four biggest things in DataStax Enterprise 5 are:
- Graph capabilities.
- Cassandra 3.0, which includes a complete storage engine rewrite.
- Tiered storage/ILM (Information Lifecycle Management).
- Policy-based replication.
Some of that terminology is mine, but perhaps my clients at DataStax will adopt it too.
We didn’t go into as much technical detail as I ordinarily might, but a few notes on that tiered storage/ILM bit are:
- It’s a way to have some storage that’s more expensive (e.g. flash) and some that’s cheaper (e.g. spinning disk). Duh.
- Since Cassandra has a strong time-series orientation, it’s easy to imagine how those policies might be specified.
- Technologically, this is tightly integrated with Cassandra’s compaction strategy.
DataStax Enterprise 5 also introduced policy-based replication features, not all of which are in open source Cassandra. Data sovereignty/geo-compliance is improved, which is of particular importance in financial services. There’s also hub/spoke replication now, which seems to be of particular value in intermittently-connected use cases. DataStax said the motivating use case in that area was oilfield operations, where presumably there are Cassandra-capable servers at all ends of the wide-area network.