Feed aggregator

There is a bug using MERGE and DUAL together

Tom Kyte - Fri, 2017-11-10 07:35
Consider please the following simple table: <code>create table table_1 (c1 varchar2(100), c2 varchar2(100));</code> If we now apply the following MERGE command (please note the WHERE clause), we get: <code> merge into table_1 tb using (se...
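The excerpt is cut off; purely as a hedged reconstruction of the statement's general shape (the table and column names come from the question, all values are assumed), a MERGE driven by a single-row DUAL subquery with a WHERE clause on the matched branch looks like this:

-- Hypothetical sketch only; not the poster's exact statement.
merge into table_1 tb
using (select 'x' c1, 'y' c2 from dual) q
on (tb.c1 = q.c1)
when matched then
  update set tb.c2 = q.c2
  where tb.c2 != q.c2
when not matched then
  insert (c1, c2) values (q.c1, q.c2);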
Categories: DBA Blogs

How to hire a Lead Oracle DBA

Tom Kyte - Fri, 2017-11-10 07:35
Hi, I'm a junior Oracle DBA at the company I recently joined. Our lead Oracle DBA resigned and my company is screening new applicants. The boss of our department might ask me to interview the potential lead Oracle DBA candidates and a...
Categories: DBA Blogs

Formatting negative values to sort correctly but keep the formatting

Tom Kyte - Fri, 2017-11-10 07:35
I have an old and a new query; the old query works fine, and I need help with the new one. For the new one, I can't seem to find a way to format two columns (latitude and longitude; I need 6 digits after the decimal point) in such a way that they sort correctly....
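As a hedged sketch of one common approach (the table and column names below are assumed, not from the question): format the values for display with an explicit sign and six decimal places, but ORDER BY the raw numeric columns so that negatives sort correctly.

-- Hypothetical names (coordinates, latitude, longitude).
select to_char(latitude,  'S9990.999999') as lat_fmt,
       to_char(longitude, 'S9990.999999') as lon_fmt
from   coordinates
order  by latitude, longitude;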
Categories: DBA Blogs

Accessibility: Firefox Compatibility Issue with the JAWS Screen Reader

PeopleSoft Technology Blog - Thu, 2017-11-09 18:14
We've recently become aware of a significant compatibility issue between Firefox V57 (called "Quantum") and JAWS (the screen reader application from Freedom Scientific).  With V57 of Firefox, each call for JAWS to obtain information takes a long time to process, so performance will suffer.  There are other compatibility issues as well.  Freedom Scientific is working with Mozilla (makers of Firefox) to address the situation. 

Firefox Quantum will not work in any fashion unless customers are running the latest versions of JAWS 2018, ZoomText 11, and MAGic 14. Previous versions of their assistive technology are not compatible with Firefox Quantum. 

This issue cannot be resolved by Oracle/PeopleSoft, but we recommend that our customers using assistive technology do not install Firefox Quantum until Firefox and Freedom Scientific have provided fixes and addressed all compatibility issues.

Here is a blog post from Freedom Scientific on the issue.

Diagnosing EBS 12.2 Upgrade Performance Issues

Steven Chan - Thu, 2017-11-09 12:06

Our Performance team has published a deep set of recommendations, in the form of a Note, for minimizing downtime when upgrading to EBS 12.2.7.

They have recently updated a companion document to that Note. The companion document describes diagnostic strategies and methods to identify and resolve performance issues when upgrading to EBS 12.2 from EBS 11i, 12.0, or 12.1.

This guide covers:

  • Statistics to gather before starting the EBS 12.2 upgrade
  • Performance tuning the Online Patching Enablement phase
  • Obtaining top SQL in cursor caches or AWR
  • Identifying long-running SQL using ALLSTATS (see the sketch after this list)
  • Automation options for Display Cursor reports
  • Using SQL Monitor Reports
  • Reporting on CBO statistics for all E-Business Suite tables
  • Diagnostics to gather after each upgrade phase
  • Online Patching diagnostics
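
As a hedged illustration of the ALLSTATS item above (standard DBMS_XPLAN usage, not an excerpt from the Note), the actual row-source statistics of a long-running statement can be inspected like this; it requires the gather_plan_statistics hint or STATISTICS_LEVEL = ALL:

-- Run the statement with plan statistics enabled, then report them.
select /*+ gather_plan_statistics */ count(*) from dba_objects;

select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));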


Categories: APPS Blogs

Oracle Block Size

Tom Kyte - Thu, 2017-11-09 10:06
Hi Tom, I would be very grateful if you could share your thoughts on Oracle block size. The "rule of thumb" is smaller Oracle Database block sizes (2 KB or 4 KB) for online transaction processing (OLTP) or mixed workload environments, and larger block size...
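As a quick, hedged aside: before forming an opinion about changing it, the default block size of an existing database can be checked like this.

-- Shows the database default block size in bytes.
select value as db_block_size
from   v$parameter
where  name = 'db_block_size';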
Categories: DBA Blogs

WHERE clause mixing AND and OR with ()

Tom Kyte - Thu, 2017-11-09 10:06
Hello, I need to mix AND and OR in a WHERE clause, like: and con1 and (con2 or con3 or con4)... t_where := t_where || ' and a.field1 = ''' || l_1 || '''' || ' ( ' || 'a.field2 = ''' || l_2 || '''' || ' or ' || ......
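As a hedged sketch of the predicate shape being asked for (l_1 through l_4 stand in for the question's variables), using bind placeholders instead of concatenated literals, with parentheses grouping the OR conditions:

declare
  t_where varchar2(4000);
begin
  -- :b1..:b4 would later be bound to l_1..l_4 when the statement runs.
  t_where := ' and a.field1 = :b1'
          || ' and (a.field2 = :b2 or a.field3 = :b3 or a.field4 = :b4)';
  dbms_output.put_line(t_where);
end;
/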
Categories: DBA Blogs

SQL*Loader and dates

Tom Kyte - Thu, 2017-11-09 10:06
Hi! I am using SQL*Loader. I have a table T in my database: T (empno, start_date date, resign_date date). My data file has data like this (the date format IN THE DATAFILE is 'YYYYMMDD'): 1,19990101,20001101 2,19981215,20010315 3,19950520...
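As a hedged sketch (the data file name is assumed, and this is SQL*Loader control-file syntax rather than SQL), a control file can apply the 'YYYYMMDD' mask directly to the date fields:

-- Hypothetical control file; only table T and its columns come from the question.
LOAD DATA
INFILE 'emp.dat'
INTO TABLE t
FIELDS TERMINATED BY ','
( empno,
  start_date  DATE "YYYYMMDD",
  resign_date DATE "YYYYMMDD"
)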
Categories: DBA Blogs

Tabular row number

Tom Kyte - Thu, 2017-11-09 10:06
In Oracle Forms 12c, I created a data block with a tabular layout and changed the number of records displayed to 10 rows. I need to get the currently selected row number in this tabular block at run time, not the current_record; I mean the number between 1 and 10. I tried get ...
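As a hedged sketch of one common technique in Forms PL/SQL (the block name 'EMP' is hypothetical): the displayed row number (1 to 10) is the current record number minus the block's topmost visible record number, plus one.

declare
  l_row pls_integer;
begin
  -- SYSTEM.CURSOR_RECORD returns the current record number as a string;
  -- TOP_RECORD is the record number of the first displayed record.
  l_row := to_number(:system.cursor_record)
           - to_number(get_block_property('EMP', top_record)) + 1;
  message('Displayed row: ' || l_row);
end;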
Categories: DBA Blogs

SGA_target is greater than the total physical memory on the server (Windows)

Tom Kyte - Thu, 2017-11-09 10:06
Hi Tom, we have a database running on 11.2.0.3 with the memory parameters set as below. This is a Windows Server 2008 R2 machine. SQL> show parameter sga NAME TYPE VALUE ------------------------------------ --...
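As a hedged aside, the relevant settings can be compared in one query before judging whether SGA_TARGET exceeds physical memory:

-- Shows the SGA/memory parameters with human-readable values.
select name, display_value
from   v$parameter
where  name in ('sga_target', 'sga_max_size', 'memory_target');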
Categories: DBA Blogs

What's your opinion about the DBA job in the future?

Tom Kyte - Thu, 2017-11-09 10:06
Hi Tom, First, I would like to thank you for your site on the web. I learn a lot from the questions my colleagues have registered on this site, and I have also learned how to explain things to developers using examples (like you do). Well, I would like to know wh...
Categories: DBA Blogs

Healthcare Organizations Choose Oracle EPM Cloud to Modernize Operations

Oracle Press Releases - Thu, 2017-11-09 07:00
Press Release
Healthcare Organizations Choose Oracle EPM Cloud to Modernize Operations Hospitals, healthcare providers and research centers select Oracle EPM Cloud to deliver high-quality patient care and control costs

Redwood Shores, Calif.—Nov 9, 2017

Healthcare organizations across the U.S. have invested in Oracle Enterprise Performance Management (EPM) Cloud solutions to embrace digital technologies and modern best practices. Oracle EPM Cloud healthcare customers include Harvard Pilgrim Health Care, the University of Texas Medical Branch at Galveston, and the University of Texas at Tyler.

To deliver high-quality patient care while controlling costs, healthcare organizations need to rethink existing business processes in order to enhance operational performance. To successfully make this transition and modernize operations, hospitals, health care providers and research centers need accurate and reliable data to plan, monitor, forecast and assess clinical, financial and operational performance across all departments.

Oracle EPM Cloud enables healthcare organizations of any size to drive predictable performance through transparent reporting and accurate forecasting insights. With an intuitive user experience and prebuilt financial functions, Oracle EPM Cloud provides healthcare organizations with greater insights by automating data analysis and creating custom planning and forecasting models. The increased visibility into financial operations delivered by Oracle EPM Cloud also helps healthcare organizations become more efficient and improve decision-making processes.

As the healthcare industry transforms, Oracle EPM Cloud solutions enable healthcare organizations to modernize operations by delivering:

  • Increased Visibility: Oracle EPM Cloud delivers modern, simple and easy-to-use solutions that empower employees by providing a centralized platform that increases collaboration and improves visibility and control.
  • Transparent Reporting: Oracle EPM Cloud enables healthcare organizations to improve profitability and cost management by providing accurate, real-time business insights that help predict performance and improve financial decision making.
  • Cost Management: Oracle EPM Cloud helps healthcare organizations increase efficiencies and spend more time on patient care by standardizing and automating business processes and reducing IT costs.

“We are helping healthcare organizations keep pace in their dynamic environment,” said Hari Sankar, group vice president, EPM product management at Oracle. “Oracle is helping more and more healthcare organizations fully realize the business benefits from the cloud. With Oracle EPM Cloud, these organizations can simplify manual processes to spend more time with their patients.”

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

An API First Approach to Microservices Development

OTN TechBlog - Wed, 2017-11-08 17:18

Co-author: Claudio Caldato, Sr. Director Development

Introduction 

Over the last couple of years, our work on various microservices platforms in the cloud has brought us into close collaboration and engagement with many customers; as a result, we have developed a deep understanding of what developers struggle with when adopting microservices architectures, in addition to a deep knowledge of distributed systems. A major motivation for joining Oracle, besides working with a great team of very smart people from startups, Amazon, and Microsoft, was the opportunity to build from scratch a platform based on open source components that truly addresses developers' needs. In this initial blog post on our new platform, we describe what drove its design and present an overview of the architecture. 

What developers are looking for

Moving to microservices is not an easy transition for developers who have been building applications using more traditional methods. There are a lot of new concepts and details developers need to become familiar with and consider when they design a distributed application, which is what a microservices application is. Throw containers and orchestrators into the mix and it becomes clear why many developers struggle to adapt to this new world.  

Developers now need to think about their applications in terms of a distributed system with a lot of moving parts; as a result, challenges such as resiliency, idempotency and eventual consistency, just to name a few, are important aspects they now need to take into account. 

In addition, with the latest trends in microservices design and best practices, they also need to learn about containers and orchestrators to make their applications and services work. Modern cluster management and container orchestration solutions such as Kubernetes, Mesos/Marathon or Docker Swarm are improving over time, which simplifies things such as networking, service discovery, etc., but they are still an infrastructure play. The main goal of these tools and technologies is to handle the process of deploying and connecting services, and guarantee that they keep running in case of failures. These aspects are more connected with the infrastructure used to host the services than the actual services themselves. Developers need to have a solid understanding of how orchestrators work, and they need to take that into account when they build services. Programming model and infrastructure are entangled; there is no clear separation, and developers need to understand the underlying infrastructure to make their services work. 

One obvious thing that we have heard repeatedly from our customers and the open source community is that developers really want to focus on the development of the logic, not on the code necessary to handle the execution environment where the service will be deployed, but what does that really mean?  

It means that above all, developers want to focus on APIs (the only thing needed to connect to another service), develop their services in a reactive style, and sometimes just use ‘functions’ to perform simple operations, when deploying and managing more complex services involves too much overhead.  

There is also a strong preference among developers to have a platform built on an OSS stack to avoid vendor lock-in, and to enable hybrid scenarios where public cloud is used in conjunction with on-premise infrastructure.  

It was the copious feedback heard from customers and developers that served as our main motivation to create an API-first microservices platform, and it is based on the following key requirements: 

  • Developers can focus solely on writing code: API-first approach 
  • It combines the traditional REST-based programming model with a modern reactive event-driven model  
  • It consolidates traditional container-based microservices with a serverless/FaaS infrastructure, offering more flexibility so developers can pick the right tool for the job 
  • Easy onboarding of 'external' services so developers can leverage things such as cloud services, and can connect to legacy or 3rd party services easily 

We were asked many times how we would describe our platform, as it covers more than just microservices; so, in a humorous moment, we came up with the Grand Unified Theory of Container Native Development.

 

The Platform Approach 

So what does the platform look like and what components are being used? Before we get into the details let’s look at our fundamental principles for building out this platform:

  • Opinionated and open: make it easy for developers to get productive right away, but also provide the option to go deep in the stack or even replace modules. 
  • Cloud vendor agnostic: although the platform will work best on our New Application Development Stack, customers need to be able to install it on top of any cloud infrastructure. 
  • Open source-based stack: we are strong believers in OSS; our stack is entirely built upon popular OSS components and will itself be available as OSS. 

The Platform Architecture 

Figure 1 shows the high-level architecture of our platform and the functionality of each component. 

Let’s look at all the major components of the platform. We start with the API registry as it changes how developers think about, build, and consume microservices. 

API Registry: 

The API registry stores all the information about available APIs in the cluster. Developers can publish an API to make it easier for other developers to use their service. Developers can search for a particular service or function (if there is a serverless framework installed in the cluster). Developers can test an API against a mock service even though the real service is not ready or deployed yet. To connect to a microservice or function in the cluster, developers can generate a client library in various languages. The client library is integrated into the source code and used to call the service. It will always automatically discover the endpoint in the cluster at runtime so developers don’t have to deal with infrastructure details such as IP address or port number that may change over the lifecycle of the service.  In future versions, we plan to add the ability for developers to set security and routing policies directly in the API registry. 

Event Manager: 

The Event Manager allows services and functions to publish events that other services and functions can subscribe to. It is the key component that enables an event-driven programming model where EventProviders publish events, and consumers, either functions or microservices, consume them. With the Event Manager, developers can combine a traditional REST-based programming model with a reactive/event-driven model in a consolidated platform that offers a consistent experience in terms of workflow and tools. 

Service Broker: 

In our transition to working for a major cloud vendor, we have seen that many customers choose to use managed cloud services instead of running and operating their services themselves on a Kubernetes cluster. A popular example of this is Redis cache, offered as a managed service by almost all major cloud providers. As a result, it is very common that a microservice-based application not only consists of services developed by the development team but also of managed cloud services. Kubernetes has introduced a great new feature called service catalog which allows the consumption of external services within a Kubernetes cluster. We have extended our initial design to not only configure the access to external services, but also to register user services with the API registry, so that developers can easily consume them along with the managed services. 

In this way external services, such as the ones provided by the cloud vendor, can be consumed like any other service in the cluster with developers using the same workflow: identify the APIs they want to use, generate the client library, and use it to handle the actual communication with the service. 

Service Broker is also our way to help developers engaged in modernizing their existing infrastructure, for instance by enabling them to package their existing code in containers that can be deployed in the cluster. We are also considering solving for scenarios in which there are existing applications that cannot be modernized; in this case, the Service Broker can be used to ‘expose’ a proxy service that publishes a set of APIs in the API Registry, thereby making the consumption of the external/legacy system similar to using any other microservice in the cluster.  

Kubernetes and Istio: 

We chose Kubernetes as the basis for our platform as it is emerging as the most popular container management platform for running microservices. Another important factor is that the community around Kubernetes is growing rapidly, and that there is Kubernetes support from every major cloud vendor.   

As mentioned before, one of our main goals is to reduce complexity for developers. Managing communications among multiple microservices can be a challenging task. For this reason, we determined that we needed to add Istio as a service mesh to our platform. With Istio we get monitoring, diagnostics, complex routing, resiliency and policies for free. This removes a big burden from developers, as they would otherwise need to implement those features themselves; with Istio, they are available at the platform level. 

Monitoring 

Monitoring is an important component of a microservices platform. With potentially a lot of moving parts, the system requires a way to monitor its behavior at runtime. For our microservices platform we chose to offer an out-of-the-box monitoring solution which is, like the other components in our platform, based on well-consolidated and battle-tested technologies such as Prometheus, Zipkin/Jaeger, Grafana and Vizceral. 

In the spirit of pushing the API-first approach to monitoring as well, our monitoring solution offers developers the ability to see how microservices are connected to each other (via Vizceral), see data flowing across them and, in the future, gain insight into which APIs have been used. Developers can then use distributed tracing information in Zipkin/Jaeger to investigate potential latency issues or improve the efficiency of their services. In the future, we plan to add integration with other services. For instance, we will add the ability to correlate requests between microservices with data structures inside the JVM so developers can optimize across multiple microservices by following how data is being processed for each request. 

What’s Next? 

This is an initial overview of our new platform, with some insight into our motivation and the design guidelines that we used. We will follow with more blog posts that go deeper into the various aspects of the platform as we get closer to our initial OSS release in early 2018. Meanwhile, please take a look at our JavaOne session.

For more background on this topic, please see our other blog posts in the Getting Started with Microservices series. Part 1 discusses some of the main advantages of microservices, and touches on some areas to consider when working with them. Part 2 considers how containers fit into the microservices story. Part 3 looks at some basic patterns and best practices for implementing microservices. Part 4 examines the critical aspects of using DevOps principles and practices with containerized microservices. 


List of Networking Concepts to Pass AWS Cloud Architect Associate Exam

Pakistan's First Oracle Blog - Wed, 2017-11-08 16:31
Networking is a pivotal concept in cloud computing, and knowing it is a must to be a successful cloud architect. Of course, you won't be physically stripping cables to crimp on RJ45 connectors, but you must know the various facets of logical networking.


You never know exactly what is going to be on the exam, but that's what exams are all about. In order to prepare for the AWS Cloud Architect Associate exam, you must thoroughly read and understand the relevant AWS documentation.


Before you read the above, it would be very beneficial if you also go and learn the following networking concepts:

  • LAN
  • WAN
  • IP addressing
  • Difference between IPV4 and IPV6
  • CIDR (see the arithmetic sketch after this list)
  • SUBNET
  • VPN
  • NAT
  • DNS
  • OSI Layers
  • TCP
  • UDP
  • ICMP
  • Router, Switch
  • HTTP
  • NACL
  • Internet Gateway
  • Virtual Private Gateway
  • Caching, Latency
  • Networking commands like route, netstat, ping, tracert, etc.
Feel free to add in the comments any other network concepts I might have missed.
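Since this is an Oracle blog, here is the CIDR arithmetic as a quick, hedged SQL sketch (the 5 reserved addresses per subnet are AWS-specific):

-- A /24 block has 2^(32-24) = 256 addresses; AWS reserves 5 per subnet.
select power(2, 32 - 24) - 5 as usable_hosts from dual;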
Categories: DBA Blogs

IS JSON not working for nested JSON

Tom Kyte - Wed, 2017-11-08 15:46
Hi Team, in one of our tables we have a column which holds JSON-format text. Data is inserted into this column from a file we receive from a vendor. While inserting the data into this column we don't validate whether it is in JSON format or not, but bef...
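As a hedged sketch (the table and column names below are hypothetical), validation can be enforced at insert time with an IS JSON check constraint, or applied after the fact as a filter (Oracle 12c and later):

-- Hypothetical names vendor_feed / payload.
alter table vendor_feed
  add constraint vendor_feed_json_chk
  check (payload is json);

-- Or find rows that would fail validation:
select count(*) from vendor_feed where payload is not json;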
Categories: DBA Blogs

Oracle Database 12c on Oracle Linux: Firewall configuration to access Enterprise Manager on http://host:5500/em

Dietrich Schroff - Wed, 2017-11-08 15:23
If you have installed your database on Oracle Linux, the first step is to access Enterprise Manager via port 5500 (https://localhost:5500/em). If you want to access this URL from another host, you have to check and change the firewall settings:

[root@localhost system]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since So 2017-10-01 18:22:30 CEST; 21h ago
     Docs: man:firewalld(1)
 Main PID: 684 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─684 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
For a quick check, disabling the firewall with
service firewalld stop
might be OK, but the better way is to allow port 5500. Therefore, check the active zone and the services:

[root@localhost system]# firewall-cmd --get-active-zone
public
  interfaces: enp0s3
[root@localhost system]# firewall-cmd --zone=public --list-services
ssh dhcpv6-client
With this knowledge you can add port 5500 to your firewall:
[root@localhost system]# firewall-cmd --zone=public --add-port=5500/tcp
success
[root@localhost system]# firewall-cmd --permanent --zone=public --add-port=5500/tcp
success
Now you should get the following:
[root@localhost system]# firewall-cmd --zone=public --list-ports
5500/tcp
and you will see the following in your browser when accessing the URL https://hostname:5500/em:
 (still Flash...)





Introducing Dev Gym! Free Training on SQL and More

OTN TechBlog - Wed, 2017-11-08 10:52

There are many ways to learn. For example, you can read a book or blog post, watch a video, or listen to a podcast. All good stuff, which is what you'd expect me to say since I am the author of ten books on the Oracle PL/SQL language, and offer scores of videos and articles on my YouTube channel and blog, respectively.

But there's one problem with those learning formats: they're passive. One way or another, you sit there, and ingest data through your eyes and ears. Nothing wrong with that, but we all know that when it comes to writing code, that sort of knowledge is entirely theoretical.

If you want to get stronger, you can't just read about weightlifting and running. 

You've got to hit the gym and lift some weights. You've got to put on your running shoes and pound the pavement. 

Or as Confucius said it back in 450 BC:

Tell me and I will forget.
Show me and I may remember.
Involve me and I will understand.

It's the same with programming. Until you start writing code, and until you start reading and struggling to understand code, you haven't really learned anything.  To get good at programming, you need to engage in some active learning.

That's what the Oracle Dev Gym is all about. And it's absolutely, totally free. 

Learn from Quizzes

Multiple choice quizzes are the core learning mechanism on the Oracle Dev Gym. Our library of over 2,500 quizzes deepens your expertise by challenging you to read and understand code, a great complement to writing and running code.

The home page offers several featured quizzes that are hand-picked by experts from that library.

Looking for something in particular? Enter a keyword or two in the search bar and we'll show you what we've got on that topic.

After submitting your answer, you can explore the quiz's topic in more detail, with full verification code scripts, links to related resources and other quizzes, and discussion on the quiz.

You accumulate points for all the quizzes you answer, but your performance on these quizzes is not ranked. To play competitively against other developers, try our weekly Open Tournaments.

Check out this video on Dev Gym quizzes. 

Learn from Workouts

Quizzes are great, but when you know nothing about the topic of a quiz, they can leave you rather more confused than educated.

So to help you get started with concepts, we’ve created workouts. These contain resources to teach you about an aspect of programming, followed up by questions on the topic to test and reinforce your newly-gained knowledge.

A workout typically consists of a video or article followed by several quizzes. But a workout could also consist simply of a set of quizzes. Either way, go through the exercises of the workout and you will find yourself better able to tackle your real world programming challenges. Build your own custom workout, pick from available workouts, and set up daily workouts (single quiz workouts that expire each day).

Check out this video on Dev Gym workouts. 

Learn from Classes

Perhaps you’re looking for something more structured to help you learn. Then a Dev Gym class might be a perfect fit.

You can think of these as "mini-MOOCs". A MOOC is a massive open online course. The Oracle Learning Library offers a variety of MOOCs and I strongly encourage you to try them out. Generally, you should expect a 3-5 hour per week commitment, over several weeks. 

Dev Gym classes are typically lighter weight. Each class module consists of a video or blog post, followed by several quizzes to reinforce what you've learned. 

A great example of a Dev Gym class is Databases for Developers, a 12-week course by Chris Saxon, a member of the AskTOM Answer Team and all-around SQL wizard.

Check out this video on Dev Gym classes. 

Open Tournaments

Sometimes you just want to learn, and other times you want to test that knowledge against other developers. Let's face it: lots of humans like to compete, and we make it easy for you to do that with our weekly Open tournaments.

Each Saturday, we publish a brand-new quiz on SQL, PL/SQL, database design and logic (this list will likely grow over time). You have until the following Friday to submit your answer. And if you don't want to compete but still want to tackle those brand-new quizzes, we let you opt-out of ranking.

But for those of you who like to compete, you can check your rankings on the Leaderboard to see how you did the previous week, month, quarter and year. And if you finish the year ranked in the top 50 in a particular technology, you are then eligible to compete in the annual championship.

Note that we do not show the results of your submission for an Open tournament until that week is over: since the quiz is competitive, we don't want to make it easy for players to share results with others who may not yet have taken the quiz. For the same reason, we also have rules against cheating. Read Competition Integrity for a description of what constitutes cheating at the Oracle Dev Gym.

Work Out Those Oracle Muscles!

So...are you ready to start working out those Oracle muscles and stretch your Oracle skills?

Visit the Oracle Dev Gym. Take a quiz, step up to a workout, or explore our classes.

Oh, and did I mention? It's all free!

 

Smartphones, IoT and Connected Cars Fueling Rise in LTE Network Traffic

Oracle Press Releases - Wed, 2017-11-08 07:00
Press Release
Smartphones, IoT and Connected Cars Fueling Rise in LTE Network Traffic Annual Oracle Index Provides Communications Professionals a Road Map to Better Plan For and Manage Global Growth in LTE Diameter Signaling

AFRICACOM 2017, SOUTH AFRICA and REDWOOD SHORES, Calif.—Nov 8, 2017

Oracle today announced the “Oracle Communications LTE Diameter Signaling Index, Sixth Edition,” highlighting the continued explosive growth in LTE Diameter signaling traffic. The report demonstrates that Diameter signaling, fueled heavily by the proliferation of smartphones and the rise of Internet of Things (IoT) enabled devices, shows no sign of slowing and is expected to reach 595 million messages per second (MPS) by 2021. Other key developments impacting LTE network traffic include LTE broadcast, VoLTE, and the signaling associated with the policy management required to support more sophisticated data plans and applications. Connected cars also continue to show strong network traffic momentum, with 9.4 MPS and a compound annual growth rate (CAGR) of 30 percent.

The report was designed as a tool for communications service provider (CSP) network engineers and executives to plan for expected increases in signaling capacity over the next five years. Download the Full Report and Infographic.

Growth in Diameter is anticipated to continue even as 5G implementations begin. While Diameter will not be the main signaling protocol of 5G, it will remain an important part of 5G networks.

For example, according to Statista.com, smartphones will represent 76 percent of wireless connections by 2021. As consumers maintain their appetite for “always on” connections to increasingly sophisticated and data intensive applications such as gaming and video, LTE network traffic will continue to skyrocket. Likewise, devices that reach beyond the traditional mobile handset, such as IoT sensors used in everything from tracking available parking spots and moisture in crops to lost pets or workers on a job site will have a significant impact on Diameter signaling growth.

“Diameter signaling traffic continues to grow significantly with little end in sight. While smartphones continue to be the traffic leader, applications such as connected cars and IoT promise a significant impact on network traffic in years to come,” said Greg Collins, Founder and Principal Analyst, Exact Ventures. “Diameter signaling controllers continue to be vital network elements that help operators secure their network borders and efficiently and effectively route signaling traffic. As such, it is critical for CSPs to understand what’s driving traffic and where, enabling them to avoid signaling traffic issues that can cause network outages, reduce customer satisfaction and increase customer churn.”

 “With consumer expectations at an all-time high, it’s more critical than ever that CSPs innovate and plan for continued Diameter signaling growth in order to stay relevant,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “The cloud continues to offer one of the clearest avenues for CSPs to accelerate and achieve these goals.”

Oracle helps CSPs create a more scalable and reliable Diameter signaling infrastructure with Oracle Communications Diameter Signaling Router and Oracle Communications Policy Management.

LTE Diameter Signaling Traffic by Region
  • Latin America and the Caribbean continues to show accelerated growth in Diameter networks. The region will generate 52 million MPS by 2021, a CAGR of 34 percent. Brazil is the largest contributor of Diameter signaling, followed by Mexico. 

  • The Middle East will reach 27.9 million MPS of Diameter signaling by 2021, a CAGR of 23 percent. Turkey and Iran are the largest generators in the region.

  • Africa continues to show strong growth in Diameter signaling. The region will generate 20 million MPS by 2021, a CAGR of 63 percent. Egypt is the top generator, followed by Nigeria and South Africa.

  • Central and Southern Asia will account for 10 percent of the world’s Diameter signaling, reaching 62.4 million MPS, a CAGR of 38 percent, by 2021. Policy management is the largest source of Diameter signaling in the region, which generates 10.4 percent of the world’s policy-related Diameter signaling. Pakistan, followed by India, is the largest generator in the region.

  • Oceania, Eastern and South-Eastern Asia generates nearly half of the world’s Diameter signaling (45 percent). By 2021, the region will generate 265 million MPS of Diameter signaling, a CAGR of 18 percent. The region is also responsible for 44 percent of the world’s policy Diameter signaling, generating 131 million MPS by 2021. China alone will generate 83.5 million MPS of policy generated Diameter signaling by 2021. Indonesia is also showing strong growth.

  • North America leads the world in LTE penetration as service providers move aggressively to sunset 2G and 3G services in favor of 4G/5G. The region will show moderate growth in the coming year, generating 59.3 million MPS by 2021, a CAGR of 12 percent.

  • Eastern and Western Europe. Eastern Europe will generate 48 million MPS of Diameter signaling by 2021, a CAGR of 46 percent. Russia, followed by Poland, is the strongest generator in the region. In comparison, Western Europe will generate 61 million MPS, a CAGR of 23 percent. The United Kingdom generates the most Diameter signaling in the region today, but Germany will surpass the UK, generating 6.4 million MPS of Diameter signaling by 2021. 


Usage and Citation

Oracle permits the media, financial and industry analysts, service providers, regulators, and other third parties to cite this research with the following attribution:
Source: “Oracle Communications LTE Diameter Signaling Index, Sixth Edition.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.415.856.5145

Displaying the contents of a PostgreSQL data file with pg_filedump

Yann Neuhaus - Wed, 2017-11-08 04:34

Did you ever wonder what exactly is inside a PostgreSQL data file? Usually you don’t care, I agree. But there might be situations where knowing how to look inside one could be a great help: maybe your file is corrupted and you want to recover as much data as possible, or maybe you just want to do some research. There is a utility called pg_filedump which makes this pretty easy. Let’s go …

Before you try to install pg_filedump you’ll need to make sure that all the header files are there in your PostgreSQL installation. Once you have that, the installation is as simple as:

postgres@pgbox:/home/postgres/ [PG10] tar -axf pg_filedump-REL_10_0-c0e4028.tar.gz 
postgres@pgbox:/home/postgres/ [PG10] cd pg_filedump-REL_10_0-c0e4028
postgres@pgbox:/home/postgres/pg_filedump-REL_10_0-c0e4028/ [PG10] make
postgres@pgbox:/home/postgres/pg_filedump-REL_10_0-c0e4028/ [PG10] make install

If everything went fine the utility should be there:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -h

Version 10.0 (for PostgreSQL 10.x)
Copyright (c) 2002-2010 Red Hat, Inc.
Copyright (c) 2011-2017, PostgreSQL Global Development Group

Usage: pg_filedump [-abcdfhikxy] [-R startblock [endblock]] [-D attrlist] [-S blocksize] [-s segsize] [-n segnumber] file

Display formatted contents of a PostgreSQL heap/index/control file
Defaults are: relative addressing, range of the entire file, block
               size as listed on block 0 in the file

The following options are valid for heap and index files:
  -a  Display absolute addresses when formatting (Block header
      information is always block relative)
  -b  Display binary block images within a range (Option will turn
      off all formatting options)
  -d  Display formatted block content dump (Option will turn off
      all other formatting options)
  -D  Decode tuples using given comma separated list of types
      Supported types:
        bigint bigserial bool char charN date float float4 float8 int
        json macaddr name oid real serial smallint smallserial text
        time timestamp timetz uuid varchar varcharN xid xml
      ~ ignores all attributes left in a tuple
  -f  Display formatted block content dump along with interpretation
  -h  Display this information
  -i  Display interpreted item details
  -k  Verify block checksums
  -R  Display specific block ranges within the file (Blocks are
      indexed from 0)
        [startblock]: block to start at
        [endblock]: block to end at
      A startblock without an endblock will format the single block
  -s  Force segment size to [segsize]
  -n  Force segment number to [segnumber]
  -S  Force block size to [blocksize]
  -x  Force interpreted formatting of block items as index items
  -y  Force interpreted formatting of block items as heap items

The following options are valid for control files:
  -c  Interpret the file listed as a control file
  -f  Display formatted content dump along with interpretation
  -S  Force block size to [blocksize]

Report bugs to 

As we want to dump a file we obviously need a table with some data, so:

postgres=# create table t1 ( a int, b varchar(50));
CREATE TABLE
postgres=# insert into t1 (a,b) select a, md5(a::varchar) from generate_series(1,10) a;
INSERT 0 10

Get the name of the file:

postgres=# select * from pg_relation_filenode('t1');
 pg_relation_filenode 
----------------------
                24702
(1 row)

Look it up in PGDATA:

postgres@pgbox:/home/postgres/ [PG10] cd $PGDATA
postgres@pgbox:/u02/pgdata/PG10/ [PG10] find . -name 24702
./base/13212/24702

… and dump it:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: None
*
* Dump created on: Wed Nov  8 10:39:33 2017
*******************************************************************
Error: Unable to read full page header from block 0.
  ===> Read 0 bytes

Hm, nothing in there. Why? The reason is simple: the data exists in PostgreSQL, but so far it has only been WAL-logged and is not yet in the data file, because no checkpoint has happened yet (in this case):

postgres=#  checkpoint;
CHECKPOINT
Time: 100.567 ms

Do it again:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: None
*
* Dump created on: Wed Nov  8 10:40:45 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL


*** End of File Encountered. Last Block Read: 0 ***

Here we go. What can we learn from that output? It is not really human readable, but at least we can see that there are ten rows. We can also list the actual contents of the rows:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -f ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -f 
*
* Dump created on: Wed Nov  8 10:41:21 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

  0000: 00000000 482c8b47 00000000 4000801d  ....H,.G....@...
  0010: 00200420 00000000 c09f7a00 809f7a00  . . ......z...z.
  0020: 409f7a00 009f7a00 c09e7a00 809e7a00  @.z...z...z...z.
  0030: 409e7a00 009e7a00 c09d7a00 809d7a00  @.z...z...z...z.

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
  1fc0: 96020000 00000000 00000000 00000000  ................
  1fd0: 01000200 02081800 01000000 43633463  ............Cc4c
  1fe0: 61343233 38613062 39323338 32306463  a4238a0b923820dc
  1ff0: 63353039 61366637 35383439 62        c509a6f75849b   

 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
  1f80: 96020000 00000000 00000000 00000000  ................
  1f90: 02000200 02081800 02000000 43633831  ............Cc81
  1fa0: 65373238 64396434 63326636 33366630  e728d9d4c2f636f0
  1fb0: 36376638 39636331 34383632 63        67f89cc14862c   

 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
  1f40: 96020000 00000000 00000000 00000000  ................
  1f50: 03000200 02081800 03000000 43656363  ............Cecc
  1f60: 62633837 65346235 63653266 65323833  bc87e4b5ce2fe283
  1f70: 30386664 39663261 37626166 33        08fd9f2a7baf3   

 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
  1f00: 96020000 00000000 00000000 00000000  ................
  1f10: 04000200 02081800 04000000 43613837  ............Ca87
  1f20: 66663637 39613266 33653731 64393138  ff679a2f3e71d918
  1f30: 31613637 62373534 32313232 63        1a67b7542122c   

 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
  1ec0: 96020000 00000000 00000000 00000000  ................
  1ed0: 05000200 02081800 05000000 43653464  ............Ce4d
  1ee0: 61336237 66626263 65323334 35643737  a3b7fbbce2345d77
  1ef0: 37326230 36373461 33313864 35        72b0674a318d5   

 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
  1e80: 96020000 00000000 00000000 00000000  ................
  1e90: 06000200 02081800 06000000 43313637  ............C167
  1ea0: 39303931 63356138 38306661 66366662  9091c5a880faf6fb
  1eb0: 35653630 38376562 31623264 63        5e6087eb1b2dc   

 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
  1e40: 96020000 00000000 00000000 00000000  ................
  1e50: 07000200 02081800 07000000 43386631  ............C8f1
  1e60: 34653435 66636565 61313637 61356133  4e45fceea167a5a3
  1e70: 36646564 64346265 61323534 33        6dedd4bea2543   

 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
  1e00: 96020000 00000000 00000000 00000000  ................
  1e10: 08000200 02081800 08000000 43633966  ............Cc9f
  1e20: 30663839 35666239 38616239 31353966  0f895fb98ab9159f
  1e30: 35316664 30323937 65323336 64        51fd0297e236d   

 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
  1dc0: 96020000 00000000 00000000 00000000  ................
  1dd0: 09000200 02081800 09000000 43343563  ............C45c
  1de0: 34386363 65326532 64376662 64656131  48cce2e2d7fbdea1
  1df0: 61666335 31633763 36616432 36        afc51c7c6ad26   

 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
  1d80: 96020000 00000000 00000000 00000000  ................
  1d90: 0a000200 02081800 0a000000 43643364  ............Cd3d
  1da0: 39343436 38303261 34343235 39373535  9446802a44259755
  1db0: 64333865 36643136 33653832 30        d38e6d163e820   



*** End of File Encountered. Last Block Read: 0 ***

But this does not help much either. When you want to see the contents in human-readable format, use the “-D” switch and provide the list of data types you want to decode:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -D int,varchar ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -D int,varchar 
*
* Dump created on: Wed Nov  8 10:42:58 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 4	a87ff679a2f3e71d9181a67b7542122c
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820

And now we can see it. This is the same data as if you did a select on the table:

postgres=# select * from  t1;
 a  |                b                 
----+----------------------------------
  1 | c4ca4238a0b923820dcc509a6f75849b
  2 | c81e728d9d4c2f636f067f89cc14862c
  3 | eccbc87e4b5ce2fe28308fd9f2a7baf3
  4 | a87ff679a2f3e71d9181a67b7542122c
  5 | e4da3b7fbbce2345d7772b0674a318d5
  6 | 1679091c5a880faf6fb5e6087eb1b2dc
  7 | 8f14e45fceea167a5a36dedd4bea2543
  8 | c9f0f895fb98ab9159f51fd0297e236d
  9 | 45c48cce2e2d7fbdea1afc51c7c6ad26
 10 | d3d9446802a44259755d38e6d163e820
(10 rows)

What happens when we do an update?

postgres=# update t1 set b = 'a' where a = 4;
UPDATE 1
postgres=# checkpoint ;
CHECKPOINT

What does it look like in the file?

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -D int,varchar ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -D int,varchar 
*
* Dump created on: Wed Nov  8 11:12:35 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      68 (0x0044)
 Block: Size 8192  Version    4            Upper    7520 (0x1d60)
 LSN:  logid      0 recoff 0x478c2998      Special  8192 (0x2000)
 Items:   11                      Free Space: 7452
 Checksum: 0x0000  Prune XID: 0x00000298  Flags: 0x0000 ()
 Length (including item array): 68

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 4	a87ff679a2f3e71d9181a67b7542122c
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:   30  Offset: 7520 (0x1d60)  Flags: NORMAL
COPY: 4	a

*** End of File Encountered. Last Block Read: 0 ***

The a=4 row is still there, but we got a new one (Item 11), which is our update. Remember that it is the job of vacuum to recycle the dead/old row versions:

postgres=# vacuum t1;
VACUUM
postgres=# checkpoint ;
CHECKPOINT

Again (just displaying the data here):

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:    0  Offset:   11 (0x000b)  Flags: REDIRECT
 Item   5 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:   30  Offset: 7584 (0x1da0)  Flags: NORMAL
COPY: 4	a

… and “Item 4” is gone (it is now a redirect). The same happens when you delete data:

postgres=# delete from t1 where a = 4;
DELETE 1
postgres=# vacuum t1;
VACUUM
postgres=# checkpoint;
CHECKPOINT

You’ll notice that both Items 4 and 11 are now gone (UNUSED):

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:    0  Offset:    0 (0x0000)  Flags: UNUSED
 Item   5 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:    0  Offset:    0 (0x0000)  Flags: UNUSED

So much for this introduction to pg_filedump; more to come, in more detail.

 

The article Displaying the contents of a PostgreSQL data file with pg_filedump appeared first on Blog dbi services.

RMAN Backup script

DBA Scripts and Articles - Wed, 2017-11-08 03:54

This is a sample backup script I use; it already has a lot of options. Feel free to make any modifications you want. If you add some good enhancements, let me know and I can put them here so everybody can benefit from them.

The post RMAN Backup script appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator