Pakistan's First Oracle Blog

Blog by Fahd Mirza Chughtai (http://www.blogger.com/profile/14722451950835849728)

Securing the CICD Pipeline in Cloud

Wed, 2022-12-07 16:15

CI/CD has become ubiquitous in almost every organization, in one form or another, as the way code gets released. That code could be application code, or it could be code that provisions infrastructure.

Securing the CI/CD pipeline amid cloud sprawl shouldn't be an afterthought. There are numerous threat vectors that can compromise a CI/CD pipeline, ranging from overly permissive IAM policies and overlooked auto-merge features to unreviewed build processes and ungoverned use of third-party packages.

Here are the Top 10 CI/CD Security Risks from Cider.


Cider says that this document helps defenders identify focus areas for securing their CI/CD ecosystem. It is the result of extensive research into attack vectors associated with CI/CD and the analysis of high-profile breaches and security flaws. Numerous industry experts across multiple verticals and disciplines came together to collaborate on this document to ensure its relevance to today's threat landscape, risk surface, and the challenges that defenders face in dealing with these risks.


Categories: DBA Blogs

How to change Launch Configuration to Launch Template in AWS

Thu, 2022-12-01 21:59

Here is a step-by-step guide on how to change a launch configuration to a launch template in AWS for an Auto Scaling group. It's actually quite simple and straightforward.

AWS sent out a notification this week to accounts that make use of Amazon EC2 Auto Scaling launch configurations: launch configurations will no longer gain support for new instance types. After December 31, 2022, no new Amazon Elastic Compute Cloud (Amazon EC2) instance types will be added to launch configurations. Existing launch configurations will continue to work after this date, but new EC2 instance types will only be supported through launch templates.


To update the ASG, follow the steps below:


1. Create a launch template, paste your user data scripts into it, and save it. Also make sure that you are using the correct AMI ID in it.


2. Once the launch template is created, navigate to your Auto Scaling group. In the details section of the ASG, click the "Edit" button in the launch configuration section. At the top you will see an option such as "Switch to launch template".


3. Select your newly created launch template and save the changes.

Here is the documentation on creating a launch template.

Here is the documentation on how to replace a launch configuration with a launch template.

The existing instances will keep running; only new instances will be launched using the launch template. On the ASG console, in the instance management section, you can check whether an instance was launched from the launch template.

For testing from the instance's perspective, such as whether the application is running or the instance is working properly, you can log in to the instance and verify the details. Note that the ASG will not automatically launch an instance after switching to the launch template; you would have to change the desired capacity to launch a new instance using the launch template.
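If you prefer to automate the switch rather than using the console, the same change can be scripted with boto3. This is a minimal sketch of my own; the template name, AMI ID, instance type, user data, and ASG name are placeholders that you would replace with your own values.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Create the launch template (placeholder values for illustration only).
ec2.create_launch_template(
    LaunchTemplateName="my-launch-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",    # make sure this is the correct AMI ID
        "InstanceType": "t3.micro",
        "UserData": "IyEvYmluL2Jhc2gKZWNobyBoZWxsbw==",  # base64-encoded user data script
    },
)

# Point the ASG at the new launch template instead of the launch configuration.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchTemplate={
        "LaunchTemplateName": "my-launch-template",
        "Version": "$Latest",
    },
)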

Categories: DBA Blogs

Steampipe Brings SQL to AWS Cloud APIs

Tue, 2022-11-29 21:30
A number of database administrators from SQL, Oracle, and other relational database backgrounds have transitioned into cloud engineering on AWS over the last few years. Most of them still yearn for and miss their SQL queries to pull out the data.


I have been using Boto3 API calls in my Python and Node.js scripts for some time to pull data out of AWS services. This data then gets pushed into tables in a MySQL RDS instance or an Oracle Express Edition database, so that I can run SQL queries to slice and dice the data as required. Selecting specific columns, filtering with a WHERE clause, grouping with GROUP BY, and various other native SQL features bring this AWS data to life. But as you can imagine, this is a lot of work and extra overhead, not to mention the cost.

This is where Steampipe enters the picture. As per Steampipe, it exposes APIs and services as a high-performance relational database, giving you the ability to write SQL-based queries to explore dynamic data. Mods extend Steampipe's capabilities with dashboards, reports, and controls built with simple HCL.




It's a sort of live database for which you don't have to create any database infrastructure. It uses your native AWS credentials, and the AWS plugin does the magic. One thing I have encountered while using it is that it starts lagging when a multi-account scenario is in play, and you may face a few issues as the volume of retrieved data increases. But for starters, and for a single account, it works like a charm and you can rely on good old Structured Query Language.
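To give a flavor of what this looks like, here is a small sketch of my own for querying Steampipe from Python over its embedded Postgres endpoint, assuming the AWS plugin is installed and the Steampipe service is running locally with its default host, port, and database; the password is a placeholder taken from the service output.

import psycopg2  # pip install psycopg2-binary

# Steampipe's defaults when running as a local service; the password is shown by
# "steampipe service start --show-password" and is a placeholder here.
conn = psycopg2.connect(
    host="127.0.0.1",
    port=9193,
    dbname="steampipe",
    user="steampipe",
    password="<your-steampipe-service-password>",
)

with conn.cursor() as cur:
    # Plain SQL against the aws_ec2_instance table exposed by the AWS plugin.
    cur.execute("""
        select instance_id, instance_type, instance_state, region
        from aws_ec2_instance
        where instance_state = 'running'
        order by region
    """)
    for row in cur.fetchall():
        print(row)

conn.close()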




Categories: DBA Blogs

HashiCorp Certified: Terraform Associate 2022

Sat, 2022-11-19 23:31

 


Categories: DBA Blogs

K9s vs K8s Difference Explained

Tue, 2022-11-08 00:39

If you are wondering what the difference between K8s and K9s is, it's not a big deal. It's not that someone has added another letter or another node to Kubernetes. Here is the K9s vs K8s difference explained:

K8s is just an abbreviation of Kubernetes: it simply counts the 8 letters between the 'K' and the 's' of the word Kubernetes.

K9s is a terminal-based user interface which makes it easier to manage K8s.

That's all there is to it.

K9s is simply a Kubernetes CLI To Manage Your Clusters In Style. It's open source as well and its repo is here.


Kubernetes standard definition goes something like this, "Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management."

And K9s is simply yet another K8s tool. Someone was joking on social media that, due to inflation, K8s is now K9s. K9s is actually pretty cool though; it is straight up one of the best tools to use alongside a cluster.

As per K9s repo, K9s provides a terminal UI to interact with your Kubernetes clusters. The aim of this project is to make it easier to navigate, observe and manage your applications in the wild. K9s continually watches Kubernetes for changes and offers subsequent commands to interact with your observed resources.

The installation of K9s is pretty straightforward, but make sure that you handle the prerequisites properly. K9s uses 256-color terminal mode; on *nix systems, make sure TERM is set accordingly. In order to issue manifest edit commands, make sure your EDITOR environment variable is set. K9s prefers recent Kubernetes versions, i.e. 1.16+.

Enjoy.

Categories: DBA Blogs

Thanks Oracle for Oracle ACE Pro Award 2022

Sun, 2022-11-06 15:48



 

Categories: DBA Blogs

AWS EKS Error repository does not exist or may require 'docker login' Solution

Thu, 2022-10-06 19:18

If your pod is in a Waiting state on your EKS cluster, and you have checked the pod's logs and determined that the container in the Waiting state is scheduled on a worker node (for example, an EC2 instance) but can't run on that node, then it's time to check whether you are pulling the right Docker image from the right repository.

So the first step is to make sure that the image and repository names are correct by logging into Amazon Elastic Container Registry (Amazon ECR), or another container image repository as per your use case. Then compare the repository or image name there with the repository or image name specified in the pod specification.


Log in to your worker node in EKS and run the following command to make sure that you can successfully pull the image on that node:


docker pull nginx:latest


If you're using Amazon ECR, then verify that the repository policy allows image pulls for the NodeInstanceRole. Also verify that the AmazonEC2ContainerRegistryReadOnly managed policy is attached to that role. If this is not the case, you might get the following error when you describe the pod:


Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: pull access denied for nginx, repository does not exist or may require 'docker login'
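As a quick, hedged illustration (not part of the original troubleshooting steps), you can list the managed policies attached to the node role with boto3; the role name below is a placeholder for your actual node instance role.

import boto3

iam = boto3.client("iam")

# Hypothetical node instance role name; substitute the role used by your node group.
role_name = "eksctl-my-cluster-NodeInstanceRole"

attached = iam.list_attached_role_policies(RoleName=role_name)
policy_names = [p["PolicyName"] for p in attached["AttachedPolicies"]]
print(policy_names)

# Image pulls from ECR need the read-only registry policy on the node role.
if "AmazonEC2ContainerRegistryReadOnly" not in policy_names:
    print("AmazonEC2ContainerRegistryReadOnly is not attached to the node role.")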


Categories: DBA Blogs

AWS EC2 Amazon Linux nginx: [emerg] unknown directive "stream"

Fri, 2022-09-09 01:15

Creating a reverse proxy for servers in restricted mode using NGINX on AWS EC2 Amazon Linux 2 can be frustrating. The following error is one of those quirks:

[root@ip-10-219-0-153 nginx]# nginx -t

nginx: [emerg] "server" directive is not allowed here in /etc/nginx/nginx.conf:9

nginx: configuration file /etc/nginx/nginx.conf test failed

Solution:

Here is the quick and easy solution: just install nginx-mod-stream and you should be fine.

[root@ip-10-219-0-153 nginx]# yum install nginx-mod-stream

Loaded plugins: extras_suggestions, langpacks, priorities, update-motd

amzn2-core                                                                                 | 3.7 kB  00:00:00

Resolving Dependencies

--> Running transaction check

---> Package nginx-mod-stream.x86_64 1:1.20.0-2.amzn2.0.5 will be installed

--> Finished Dependency Resolution


Dependencies Resolved


==================================================================================================================

 Package                     Arch              Version                         Repository                    Size

==================================================================================================================

Installing:

 nginx-mod-stream            x86_64            1:1.20.0-2.amzn2.0.5            amzn2extra-nginx1             87 k


Transaction Summary

==================================================================================================================

Install  1 Package


Total download size: 87 k

Installed size: 172 k

Is this ok [y/d/N]: y

Downloading packages:

nginx-mod-stream-1.20.0-2.amzn2.0.5.x86_64.rpm                                             |  87 kB  00:00:00

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : 1:nginx-mod-stream-1.20.0-2.amzn2.0.5.x86_64                                                   1/1

  Verifying  : 1:nginx-mod-stream-1.20.0-2.amzn2.0.5.x86_64                                                   1/1


Installed:

  nginx-mod-stream.x86_64 1:1.20.0-2.amzn2.0.5


Complete!

[root@ip-10-219-0-153 nginx]# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful


Hope that helps.

Categories: DBA Blogs

Changing SPF Record in AWS Route53

Wed, 2022-09-07 00:33

SPF (Sender Policy Framework) is an email authentication protocol that checks the sender's IP address against a list of IPs published by the domain listed in the Return-Path of the email. This list is known as the SPF record.

On AWS Route53 DNS, you can add the SPF record as a TXT record. Back in the day, you would have to add multiple IPs to your TXT record in Route53, which soon becomes far too large and unmanageable, and then you start hitting the limits. The best way is to get a dynamic SPF record from your email security provider and then use that one in the DNS.
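As an illustration of my own (not from the original post), updating the TXT record with boto3 might look like the following sketch; the hosted zone ID, domain name, and SPF include host are placeholders.

import boto3

route53 = boto3.client("route53")

# Placeholder values for illustration only.
hosted_zone_id = "Z0123456789ABCDEFGHIJ"
spf_value = '"v=spf1 include:spf.example-provider.com -all"'  # TXT record values must be quoted

route53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,
    ChangeBatch={
        "Comment": "Switch SPF to the email security provider's dynamic include",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",
                    "Type": "TXT",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": spf_value}],
                },
            }
        ],
    },
)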

When you make the change, you might notice a SoftFail in the results when you verify the SPF record. An SPF failure occurs when the sender's IP address is not found in the SPF record, which can mean the email is sent to spam or discarded altogether. You can ignore the SoftFail for now. If everything is fine, you should see something like the following from the MX Toolbox SPF checker.

SPF Record Published: SPF Record found (Status Ok)
SPF Record Deprecated: No deprecated records found (Status Ok)
SPF Multiple Records: Less than two records found (Status Ok)
SPF Contains characters after ALL: No items after 'ALL' (Status Ok)
SPF Syntax Check: The record is valid (Status Ok)
SPF Included Lookups: Number of included lookups is OK (Status Ok)
SPF Type PTR Check: No type PTR found (Status Ok)
SPF Void Lookups: Number of void lookups is OK (Status Ok)
SPF MX Resource Records: Number of MX Resource Records is OK (Status Ok)
SPF Record Null Value: No Null DNS Lookups found (Status Ok)


Hope that helps.

Categories: DBA Blogs

AWS EKS Kubernetes Dashboard Token and Deployment

Fri, 2022-06-17 00:42

 Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.


To deploy the Dashboard, execute the following command:


kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml



Since this is deployed to our private cluster, we need to access it via a proxy. The kubectl proxy command can forward our requests to the dashboard service. In your workspace, run the following command:


kubectl proxy --port=8080 --address=0.0.0.0 --disable-filter=true &


This will start the proxy, listen on port 8080, listen on all interfaces, and will disable the filtering of non-localhost requests.


This command will continue to run in the background of the current terminal’s session.


To log in to the Dashboard, retrieve the service account token by describing the dashboard token secret (the secret name suffix will differ in your cluster):

$ kubectl describe secret kubernetes-dashboard-token-8cz55 -n kubernetes-dashboard

Name:         kubernetes-dashboard-token-8cz55

Namespace:    kubernetes-dashboard

Labels:       <none>

Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard

              kubernetes.io/service-account.uid: e54ec9ae-5f25-46ac-9ac6-c8e0a4f2901e


Type:  kubernetes.io/service-account-token


Data

====

ca.crt:     1066 bytes

namespace:  20 bytes

token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImNGMlZGZ0NISFBFUmlMRTVyTV9fTi00Sl9BbjYwdTgxY2FJeVRScGJTdUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQ
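If you prefer to pull the token programmatically, here is a minimal sketch of my own using the official Kubernetes Python client, assuming a working kubeconfig and the same secret name as above.

import base64

from kubernetes import client, config

# Assumes your kubeconfig already points at the EKS cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

# The secret name matches the one described above; yours will have a different suffix.
secret = v1.read_namespaced_secret(
    name="kubernetes-dashboard-token-8cz55",
    namespace="kubernetes-dashboard",
)

# Secret data is base64-encoded; decode the token before pasting it into the Dashboard login.
token = base64.b64decode(secret.data["token"]).decode("utf-8")
print(token)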

Categories: DBA Blogs

AWS Config to Rescue

Tue, 2022-05-10 00:41

 AWS Config is a service for assessment, audit, and evaluation of the configuration of resources in your account. You can monitor and review changes in resource configuration using automation against a desired configuration. The newly expanded set of types includes resources from Amazon SageMaker, Elastic Load Balancing, AWS Batch, AWS Step Functions, AWS Identity and Access Management (IAM), and more.


AWS Config continually assesses, audits, and evaluates the configurations and relationships of your resources.


Discover resources that exist in your account, record their configurations, and capture any changes, allowing you to quickly troubleshoot operational issues. Codify your compliance requirements as AWS Config rules and author remediation actions, automating the assessment of your resource configurations across your organization. Evaluate resource configurations for potential vulnerabilities, and review your configuration history after potential incidents to examine your security posture.


AWS Config enables you to record software configuration changes within your Amazon EC2 instances and servers running on-premises, as well as servers and Virtual Machines in environments provided by other cloud providers. With AWS Config, you gain visibility into operating system (OS) configurations, system-level updates, installed applications, network configuration and more. AWS Config also provides a history of OS and system-level configuration changes alongside infrastructure configuration changes recorded for EC2 instances.


AWS Config provides you with pre-built rules for evaluating provisioning and configuring of your AWS resources as well as software within managed instances, including Amazon EC2 instances and servers running on-premises. You can customize pre-built rules to evaluate your AWS resource configurations and configuration changes, or create your own custom rules in AWS Lambda that define your internal best practices and guidelines for resource configurations. Using AWS Config, you can assess your resource configurations and resource changes for compliance against the built-in or custom rules.


Conformance packs also provide compliance scores. A compliance score is a percentage-based score that helps you quickly discern the level to which your resources are compliant for a set of requirements that are captured within the scope of a conformance pack. A compliance score is calculated based on the number of rule-to-resource combinations that are compliant within the scope of a conformance pack. For example, a conformance pack with 5 rules applying to 5 resources has 25 (5x5) possible rule-resource combinations. If 2 resources are not compliant with 2 rules, the compliance score would be 84%, indicating that 21 out of 25 rule-resource combinations are currently in compliance. 
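As a quick worked example of that arithmetic (my own sketch, not output from AWS Config):

# Compliance score example from the paragraph above.
rules = 5
resources = 5
total_combinations = rules * resources          # 25 rule-resource combinations

non_compliant = 2 * 2                           # 2 resources failing 2 rules each
compliant = total_combinations - non_compliant  # 21

compliance_score = compliant / total_combinations * 100
print(f"{compliance_score:.0f}%")               # 84%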


Categories: DBA Blogs

3 Reasons Why Oracle Cloud is Most Environment Friendly

Thu, 2022-04-21 22:28

 Sustainable cloud positions companies to deliver on new commitments: carbon reduction and responsible innovation. Companies have historically driven financial, security, and agility benefits through cloud, but sustainability is becoming an imperative.


44% of CEOs in the United Nations Global Compact - Accenture Strategy CEO Study on Sustainability see a net-zero future for their company in the next ten years.


Between 2013 and 2019, companies with consistently high environmental, social and governance (ESG) performance enjoyed 4.7x higher operating margins and lower volatility than low ESG performers over the same period.


Migrations to the public cloud result in up to 30-40% total cost of ownership (TCO) savings.


Drivers like greater workload flexibility, better server utilization rates, and more energy-efficient infrastructure all make public clouds more cost efficient than enterprise-owned data centers.


Here are the top 3 reasons:


1. Better Infrastructure

Oracle cloud data centers are typically located closer to the facilities that power them, to prevent large losses while transmitting electrical energy over long distances. Traditional data centers usually don't have a choice of location unless the company that builds them has tons of money, like Facebook or Yahoo.


2. Higher Utilization Rate

Oracle cloud consolidates machine use, operating servers at high utilization rates, increasing efficiency. When hardware sits idle (the usual case in private data centers), it creates poor efficiency and has negative effects on the environment. 


3. Reduced Electricity Use

Traditional data hardware systems are high maintenance, requiring uninterruptible power supplies, cooling, and tons of electricity. Moving basic software programs to the Oracle cloud can save electricity immensely.


Categories: DBA Blogs

Cloud Data Lakehouse

Tue, 2022-04-12 21:44

 A data lake can help you break down data silos and combine different types of analytics into a centralized repository. You can store all of your structured and unstructured data in this repository. However, setting up and managing data lakes involve a lot of manual, complicated, and time-consuming tasks.

We’re seeing the use of data analytics expanding among new audiences within organizations, for example with users like developers and line of business analysts who don’t have the expertise or the time to manage a traditional data warehouse. Also, some customers have variable workloads with unpredictable spikes, and it can be very difficult for them to constantly manage capacity.

A data lake is a place to store your structured and unstructured data, as well as a method for organizing large volumes of highly diverse data from diverse sources. Data lakes are becoming increasingly important as people, especially in business and technology, want to perform broad data exploration and discovery. Bringing data together into a single place, or at least most of it, makes that simpler.

A data lakehouse can be defined as a modern data platform built from a combination of a data lake and a data warehouse. A data lakehouse takes the flexible storage of unstructured data from a data lake and the management features and tools from data warehouses, then strategically implements them together as a larger system.   

Oracle Cloud Infrastructure Data Integration is a fully managed, serverless, cloud-native service that extracts, loads, transforms, cleanses, and reshapes data from a variety of data sources into target Oracle Cloud Infrastructure services, such as Autonomous Data Warehouse and Oracle Cloud Infrastructure Object Storage. 


Categories: DBA Blogs

OCI Vulnerability Scanning Service (VSS) and Oracle Cloud Guard

Tue, 2022-04-12 21:38

 The Cloud Native Computing Foundation reported that over 92% of firms are using containers in production in 2020, up from 23% in 2016. The need to innovate faster and shift to cloud-native application architectures isn’t just driving complexity, it’s creating significant vulnerability blind spots.


Oracle has a new Oracle Cloud Guard detector for container image scanning. Customers can set the risk level for container images in the new Cloud Guard detector. The image findings are collected by the detector and then will become a container image ‘problem’ in Cloud Guard. This additional feature is great for the users that normally do not use the VSS or OCIR consoles to check the status of their container images. Cloud Guard will alert users when VSS detects container images with high risk vulnerabilities so that everyone will know that a development team needs to address the issues quickly.


Container security is the process of implementing tools and policies to ensure that container infrastructure, apps, and other container components are protected. Linux containers allow both developers and IT operations to create a portable, lightweight, and self-sufficient environment for every application. However, securing containerized environments is a significant concern for Dev/Sec/Ops teams.


Unfortunately, container security is much more difficult to achieve than security for more traditional compute platforms, such as virtual machines or bare metal hosts.


A container is a standalone file or package of software files with everything you need to run an application. The application’s code, dependencies, library, runtime, and system tools are all “contained” within the container. As a result, containers have made the process of developing an application faster, simpler, and more powerful than ever.


To reduce an application’s attack surface, developers need to remove any components that aren’t needed. Use scripts to configure hosts properly based on the CIS benchmarks. Although legacy SCA and SAST tools can be slow and cumbersome to use, many have been evolving in recent years to support DevOps initiatives and automation, and they are still an important part of container security. 


Categories: DBA Blogs

OCI AI services and NVIDIA

Tue, 2022-04-12 21:32

Make accurate predictions, get deeper insights from your data, reduce operational overhead, and improve customer experience with Oracle AI and ML services. Oracle Cloud helps you at every stage of your ML adoption journey with the most comprehensive set of artificial intelligence (AI) and ML services, infrastructure, and implementation resources.

Improve your business outcomes with ready-made intelligence for your applications and workflows. From computer vision to automated data extraction, and from analysis to quality control, Oracle has all kinds of offerings.

Oracle has teamed up with NVIDIA accelerated computing to demo HPC applications. Oracle has pretrained ML services that can be seamlessly integrated by developers, data scientists, and even business analysts directly.

OCI AI services are a collection of prebuilt machine learning models that developers can easily add to their applications and business operations. These models, which you can further custom train on an organization’s own business data, are pretrained on industry data, which helps them deliver more accurate results. Developers can now focus on accelerating application development without needing data science experience.

The following services all run directly on NVIDIA GPU offerings in OCI:

  • Oracle Digital Assistant
  • OCI Language
  • OCI Vision
  • OCI Speech
  • OCI Anomaly Detection
  • OCI Forecasting

Categories: DBA Blogs

What to Expect from First Oracle CloudWorld

Tue, 2022-04-12 21:27

Oracle OpenWorld has become CloudWorld. So what can you expect from the first Oracle CloudWorld, the premier Oracle event of the year, in glittering Las Vegas?

From October 16-20, 2022, Oracle is all set to dazzle the world. From IT administrators to business professionals, CloudWorld attendees will learn from organizations of all sizes and industries as they share how they’ve used Oracle Cloud to modernize their IT operations.



Just like its other solutions, Oracle's aim is to cover the whole business fleet of any kind. Oracle Cloud wants to be a cloud with solutions for every type of industry, including but not limited to finance, supply chain, customer experience, human resources, and more.

If there is one event this year you want to attend, it should be Oracle CloudWorld.


Categories: DBA Blogs

Oracle Cloud Functions in Python Example

Fri, 2022-04-01 22:36

Oracle Functions is a fully managed, multitenant, highly scalable, on-demand, functions-as-a-service platform. It’s built on enterprise-grade Oracle Cloud Infrastructure (OCI) and powered by the Fn Project open source engine. The serverless and elastic architecture of Oracle Functions means you have no infrastructure administration or software administration to perform. 


One common requirement is to connect to an Oracle database through Oracle Functions. You can use the following Python code chunk to accomplish that:


import cx_Oracle


def sqlconnect(user, passwd, dsn, query):
    # Connect to the database, run the query, and return all rows.
    conn = cx_Oracle.connect(user=user, password=passwd, dsn=dsn)
    cursor = conn.cursor()
    cursor.execute(query)
    return cursor.fetchall()


if __name__ == '__main__':
    host = "remoteserver"
    service_name = "test"
    conn_string = f"{host}/{service_name}"
    rows = sqlconnect("user", "password", conn_string, "select sysdate from dual")
    print(rows)

Categories: DBA Blogs

ORA-00902: invalid datatype Issue

Wed, 2022-03-30 22:45

 I have an Oracle package with procedures to randomize and reset passwords for our SDE/GIS schemas. These procedures work perfectly via SQL Plus command line commands. My package looks like this:

create or replace package gis_pass_pkg as
  TYPE schema_name_var IS TABLE OF VARCHAR2(1000);
  procedure randomize_pass(schema_name in schema_name_var);
  procedure reset_pass(schema_name in schema_name_var);
end gis_pass_pkg;
/
-- Procedures omitted for brevity


Here is the error I receive when executing through the toolbox script:


Traceback (most recent call last):
  File "T:\DataCenter\Citrix\AppData01\clhays\Application Data\ESRI\Desktop10.2\ArcToolbox\My Toolboxes\SDE Manager Scripts\ResetPasswordsViaPackage.py", line 53, in
    sysConn.execute(SQLexe)
  File "c:\arcgis\desktop10.2\arcpy\arcpy\arcobjects\arcobjects.py", line 27, in execute
    return convertArcObjectToPythonObject(self._arc_object.Execute(*gp_fixargs(args)))
AttributeError: ArcSDESQLExecute: SreamExecute ArcSDE Extended error 902 ORA-00902: invalid datatype


Failed to execute (resetpasswords).


I am assuming that the Oracle error of "ORA-00902: invalid datatype" is related to configuration of the procedure call from the toolbox.


The response in particular included the following code:


declare
  a    dbms_utility.uncl_array;
  len  pls_integer;
begin
  dbms_utility.comma_to_table('One,Two,Three,Four', len, a);
  for i in 1..a.count loop
    dbms_output.put_line( a(i) );
  end loop;
end;
/

Categories: DBA Blogs

Oracle Cloud Takes Sustainability Very Seriously

Thu, 2022-03-17 21:23

 As per Oracle's own announcement, "Oracle has set a target to achieve net zero emissions by 2050, and to halve the greenhouse gas emissions across our operations and supply chain by 2030, relative to a 2020 baseline. This target has been approved by the Exponential Roadmap Initiative, an accredited partner of the United Nations Race to Zero."

When building cloud workloads, the practice of sustainability is understanding the impacts of the services used, quantifying impacts through the entire workload lifecycle, and applying design principles and best practices to reduce these impacts. Oracle is responsible for optimizing the sustainability of the cloud – delivering efficient, shared infrastructure, water stewardship, and sourcing renewable power.


Oracle Cloud has a lower carbon footprint and is more energy efficient than typical on-premises alternatives, because Oracle invests in efficient power and cooling technology, operates energy-efficient server populations, and achieves high server utilization rates. Cloud workloads reduce impact by taking advantage of shared resources, such as networking, power, cooling, and physical facilities. You can migrate your cloud workloads to more efficient technologies as they become available and use cloud-based services to transform your workloads for better sustainability.


Sustainability in the cloud is a continuous effort focused primarily on energy reduction and efficiency across all components of a workload by achieving the maximum benefit from the resources provisioned and minimizing the total resources required. This effort can range from the initial selection of an efficient programming language, adoption of modern algorithms, use of efficient data storage techniques, deploying to correctly sized and efficient compute infrastructure, and minimizing requirements for high-powered end-user hardware.


In FY20, Oracle added the Oracle Austin Waterfront campus to their list of buildings with 100% renewable electricity use. They now have more than 80 offices globally with electric vehicle charging stations to help meet employees’ needs.

Categories: DBA Blogs

Knowing ETA for Parallel Queries in Oracle

Sat, 2022-03-12 21:31

While running heavy, long-running, and critical queries on production Oracle databases, especially in the cloud, knowing an approximate ETA is a must. This is especially true in the case of parallel queries on large datasets.

The following query can help ascertain that ETA:

col sid for 999999
col QC_SID for 999999
col QC_INST for 9
col username for a10
col operation_name for a20
col target for a20
col units for a10
col start_time for a18

select
  px.sid,
  decode(px.qcinst_id, NULL, username,
         ' - ' || lower(substr(pp.SERVER_NAME,
                               length(pp.SERVER_NAME) - 4, 4))) "Username",
  --decode(px.qcinst_id, NULL, 'QC', '(Slave)') "QC/Slave",
  --to_char(px.server_set) "SlaveSet",
  --to_char(px.inst_id) "Slave INST",
  substr(opname, 1, 30)  operation_name,
  substr(target, 1, 30)  target,
  sofar,
  totalwork,
  decode(totalwork, 0, 0, round(sofar / totalwork * 100)) pct_done,
  units,
  start_time,
  round(totalwork / (sofar / ((sysdate - start_time) * 1440))) eta_min,
  decode(px.qcinst_id, NULL, s.sid, px.qcsid) QC_SID,
  px.qcinst_id QC_INST
from gv$px_session px,
     gv$px_process pp,
     gv$session_longops s
where px.sid     = s.sid
  and px.serial# = s.serial#
  and px.inst_id = s.inst_id
  and px.sid     = pp.sid (+)
  and px.serial# = pp.serial# (+)
  and sofar     <> totalwork
order by
  decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
  px.QCSID,
  decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
  px.SERVER_SET,
  px.INST_ID
/


Categories: DBA Blogs
