Pakistan's First Oracle Blog
Most Underappreciated AWS Service and Why
Who wants to mention in their resume that one of their operational tasks is tagging cloud resources? Well, I did, and I mentioned that one of the tools I used for that purpose was Tag Editor. The interviewer was surprised to learn that there was such a thing in AWS that allowed tagging multiple resources at once. I got the job thanks to this most under-appreciated and largely unknown service.
Tagging is boring but essential. As the cloud matures, tagging is fast becoming an integral part of it. In the environments I manage, most tag management is automated, but there is still an occasional requirement for manual bulk tagging, and that's where Tag Editor comes in very handy. Besides bulk tagging, Tag Editor enables you to search for the resources that you want to tag and then manage tags for the resources in your search results.
There are various other tools available from AWS to ensure tag compliance and management, but the reason I like Tag Editor most is its ease of use and its single pane of glass to search resources by tag keys, tag values, region, or resource type. It's not as glamorous as AWS Monitron, AWS Proton, or AWS Fargate, but it is just as useful as any of them.
In our environment, if it's not tagged, it's not allowed in the cloud. Tag Editor addresses the basics of being in the cloud. Get it right, and you are well on your way to a well-architected cloud infrastructure.
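Tag Editor itself is console-based, but the same kind of search-then-bulk-tag flow can also be scripted. The following is only a minimal sketch using the Resource Groups Tagging API via boto3; the resource type, tag keys, and tag values are made-up examples, not a recommendation.

# Minimal sketch of a Tag Editor-style search and bulk tag via the
# Resource Groups Tagging API (boto3). Resource type and tags are placeholders.
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Find EC2 instances that are missing an Environment tag
arns = []
for page in tagging.get_paginator("get_resources").paginate(ResourceTypeFilters=["ec2:instance"]):
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"]: t["Value"] for t in resource.get("Tags", [])}
        if "Environment" not in tags:
            arns.append(resource["ResourceARN"])

# Tag them in bulk; the API accepts up to 20 ARNs per call
for i in range(0, len(arns), 20):
    tagging.tag_resources(
        ResourceARNList=arns[i:i + 20],
        Tags={"Environment": "dev", "Owner": "platform-team"},
    )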
From DBA to DBI
Recently Pradeep Parmer at AWS had a blog post about transitioning from DBA to DBI, or in other words from database administrator to database innovator. I wonder what exactly the difference is here, as any DBA worth his or her salt is already an innovator.
Administering a database is not about sleepily issuing backup commands or, in the case of cloud managed databases, clicking here and there. Database administration has evolved over time just like other IT roles and is totally different from what it was a few years back.
Regardless of the database engine you use, you have to have a breadth of knowledge about operating systems, networking, automation, and scripting, on top of database concepts. With managed database services in the cloud like AWS RDS, GCP Cloud SQL, or BigQuery, many of the old skills have become outdated, but new ones have sprung up. That has always been the case in the DBA field.
Take the example of Oracle: what we were doing in Oracle 8i became obsolete in Oracle 11g, and Oracle 19c is a totally different beast. Oracle Exadata, RAC, the various types of DR services, and Fusion Middleware are a new ballgame with every version.
Even with managed database services, the role of the DBA has become more involved in terms of migrations and then optimizing what's running within the databases to stop database costs from going through the roof.
So the point here is that DBAs have always been innovators. They have always been trying to find new ways to automate the management and healing of their databases. They are always under pressure to eke out the last possible optimization from their systems, and that's still the case even if those databases are supposedly managed by cloud providers.
With purpose-built databases, where a different database technology addresses a different use case, the role of the DBA has only become more relevant, as DBAs have to evolve to cover all these graph, in-memory, and other nifty types of databases.
We have always been innovators my friend.
What is Purpose Built Database
In simple words, a general-purpose database engine is a big, clunky piece of software with features for all use cases, and it's up to you to choose which features to use. In a purpose-built database, you get a lean, specific database which is only suitable for the feature set you want.
For instance, AWS offers 15 purpose-built database engines, including relational, key-value, document, in-memory, graph, time series, and ledger databases. GCP also provides multiple database types like Spanner, BigQuery, etc.
But the thing is that one-size-fits-all monolithic databases aren't going anywhere. They are here to stay. A medium to large organization has way too many requirements and features in use, and having one database for every use case increases the footprint and cost. For every production database, there is a dev, test, and QA database, so the footprint keeps increasing.
So even though the purpose-built database notion is great, it's not going to throw the monolithic database out of the window. It just provides another option: an organization could run a managed purpose-built database service for a specialized use case, but for general OLTP and data warehouse requirements, monolithic is still the way.
5 Important Steps Before Upgrading Oracle on AWS RDS
Even though AWS RDS (Relational Database Service) is a managed service, which means that you won't have to worry about upgrades, patches, and other tidbits, you still have the option of manually triggering an upgrade at a time of your choosing.
Upgrading an Oracle database is quite critical, not only for the database itself but, more importantly, for the dependent applications. It's very important to try out any RDS upgrade on a representative test system beforehand to iron out any wrinkles and check the timings and any other potential issues.
There are 5 important steps you can take before upgrading Oracle on AWS RDS to make the process more risk-free, speedy, and reliable:
- Check invalid objects such as procedures, functions, packages, etc. in your database.
- Make a list of the objects which are still invalid and, if possible, delete them to remove clutter.
- Disable and remove audit logs if they are stored in the database.
- Convert dbms_job jobs and other legacy items to dbms_scheduler.
- Take a snapshot of your production database right before you upgrade to speed up the upgrade process, as then only a delta snapshot will be taken during the upgrade (see the sketch after this list).
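For the snapshot step, and for triggering the upgrade itself at a time of your choosing, the RDS API can be scripted with boto3. This is only a rough sketch; the instance identifier, snapshot name, and target engine version below are assumptions you would replace with your own.

# Rough sketch: pre-upgrade snapshot, then a manually triggered engine upgrade.
# Instance identifier, snapshot name, and engine version are placeholders.
import boto3

rds = boto3.client("rds")

# Manual snapshot right before the upgrade, so the upgrade-time snapshot is only a delta
rds.create_db_snapshot(
    DBInstanceIdentifier="my-oracle-prod",
    DBSnapshotIdentifier="my-oracle-prod-pre-upgrade",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="my-oracle-prod-pre-upgrade"
)

# Trigger the engine upgrade yourself instead of waiting for the maintenance window
rds.modify_db_instance(
    DBInstanceIdentifier="my-oracle-prod",
    EngineVersion="19.0.0.0.ru-2021-01.rur-2021-01.r1",  # example target version only
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)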
Choice State in AWS Step Functions
Richly asynchronous, serverless applications can be built using AWS Step Functions. The enhanced Choice state in AWS Step Functions is the newest feature, and it was long awaited.
In simple words, we define steps and their transitions and call the whole thing a state machine. To define this state machine, we use the Amazon States Language (ASL). ASL is a JSON-based structured language that defines state machines and collections of states that can perform work (Task states), determine which state to transition to next (Choice state), and stop execution on an error (Fail state).
So if the requirement is to add branching logic like an if-then-else or case statement to our state transitions, the Choice state comes in handy. The enhancement introduces various new operators into ASL, and the sky is now the limit with the possibilities. Operators for the Choice state include comparison operators like IsNull and IsString, existence operators like IsPresent, glob wildcards for matching strings, and variable-to-variable string comparison.
Choice State enables developers to simplify existing definitions or add dynamic behavior within state machine definitions. This makes it easier to orchestrate multiple AWS services to accomplish tasks. Modelling complex workflows with extended logic is now possible with this new feature.
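As an illustration, here is a rough sketch (not a production definition) of a small state machine that branches on the new operators, created through boto3; the state machine name and role ARN are placeholders.

# Sketch of a state machine whose Choice state uses IsPresent and a glob wildcard.
# The name and roleArn are placeholders, not real resources.
import json
import boto3

definition = {
    "Comment": "Illustrative Choice state",
    "StartAt": "RouteOnInput",
    "States": {
        "RouteOnInput": {
            "Type": "Choice",
            "Choices": [
                # Existence operator: branch only if $.orderId is present in the input
                {"Variable": "$.orderId", "IsPresent": True, "Next": "ProcessOrder"},
                # Glob wildcard string comparison
                {"Variable": "$.fileName", "StringMatches": "*.csv", "Next": "ProcessCsv"},
            ],
            "Default": "Reject",
        },
        "ProcessOrder": {"Type": "Pass", "End": True},
        "ProcessCsv": {"Type": "Pass", "End": True},
        "Reject": {"Type": "Fail", "Error": "UnroutableInput"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="choice-state-demo",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-demo-role",
)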
Now one hopes that AWS lets us do all of this graphically instead of dabbling in ASL.
CloudFormation Template for IAM Role with Inline Policy
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  vTableName:
    Type: String
    Description: the tablename
    Default: arn:aws:dynamodb:ap-southeast-2:1234567:table/test-table
  vUserName:
    Type: String
    Description: New account username
    Default: mytestuser
Resources:
  DynamoRoleForTest:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${vUserName}'
            Action:
              - sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: DynamoPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:BatchGet*
                  - dynamodb:DescribeStream
                  - dynamodb:DescribeTable
                  - dynamodb:Get*
                  - dynamodb:Query
                  - dynamodb:Scan
                Resource: !Ref vTableName
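If it helps, the template can also be deployed with a few lines of boto3. This is a rough sketch where the stack name and template file path are placeholders; CAPABILITY_IAM is required because the stack creates an IAM role.

# Rough sketch: create a stack from the template above.
# Stack name and file path are placeholders; CAPABILITY_IAM acknowledges IAM resource creation.
import boto3

cfn = boto3.client("cloudformation")
with open("dynamo-role.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="dynamo-role-for-test",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],
    Parameters=[{"ParameterKey": "vUserName", "ParameterValue": "mytestuser"}],
)
cfn.get_waiter("stack_create_complete").wait(StackName="dynamo-role-for-test")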
I hope that helps. Thanks.
How to Read Docker Inspect Output
Here is a quick, easy set of instructions on how to read docker inspect output:
First you run the command:
docker inspect <image id> or <container id>
and it outputs JSON. Normally you are interested in what exactly is in this docker image which you have just pulled from the web or inherited in your new job.
Now copy this JSON output and put it in VS Code or any online JSON editor of your choice. For a quick glance, look at the node "ContainerConfig". This node tells you what exactly was run within the temporary container which was used to build this image, such as CMD, Entrypoint, etc.
In addition to the above, the following is a description of the important bits of information found in the inspect command output:
- Id: The unique identifier of the image.
- Parent: A link to the identifier of the parent image of this image.
- Container: The temporary container created when the image was built.
- ContainerConfig: Contains what happened in that temporary container.
- DockerVersion: Version of Docker used to create the image.
- VirtualSize: Image size in bytes.
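If you prefer to pull these fields out programmatically instead of eyeballing the JSON, here is a rough sketch using the Docker SDK for Python (assuming the docker package is installed and the daemon is reachable); the image name is a placeholder, and newer Docker releases may omit ContainerConfig from image inspect output.

# Rough sketch: read selected `docker inspect` fields via the Docker SDK for Python.
# The image name is a placeholder; ContainerConfig may be absent on newer Docker versions.
import docker

client = docker.from_env()
image = client.images.get("nginx:latest")
info = image.attrs  # the same JSON document that `docker inspect` prints

print(info["Id"])
print(info.get("Parent"))
print(info.get("DockerVersion"))
print(info.get("VirtualSize"))
cfg = info.get("ContainerConfig") or info.get("Config", {})
print(cfg.get("Cmd"), cfg.get("Entrypoint"))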
I hope that helps.
Installing Docker on Amazon Linux 2
Quick Intro to BOTO3
I just published my very first tutorial video on YouTube, which gives a quick introduction to AWS BOTO3 with a step-by-step walkthrough of a simple program. Please feel free to subscribe to my channel. Thanks. You can find the video here.
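To give a flavour of the kind of simple program the video walks through, here is a minimal boto3 sketch (assuming AWS credentials are already configured) that lists the S3 buckets in an account:

# Minimal boto3 sketch: list S3 buckets.
# Assumes credentials are configured via environment, ~/.aws, or an instance role.
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])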
Checklist While Troubleshooting Workload Errors in Kubernetes
The following is a checklist for troubleshooting workload/application errors in Kubernetes (a sketch using the Python Kubernetes client follows the list):
1- First check how many nodes there are.
2- Check what namespaces are present.
3- Identify which namespace the faulty application is in.
4- Now check which deployment the faulty app belongs to.
5- Now check which replicaset (if any) is part of that deployment.
6- Then check which pods are part of that replicaset.
7- Then check which services are part of that namespace.
8- Then check which service corresponds to the deployment where our faulty application is.
9- Then make sure the label selectors between the deployment and the pod template are correct.
10- Then ensure the label selector between the service and the deployment is correct.
11- Then check that any service name referred to in a deployment is correct. For example, if a webserver pod refers to a database host (which will be the service name of the database) in the env of its pod template, the reference must be correct.
12- Then check that the ports are correct in the ClusterIP or NodePort services.
13- Check whether the status of the pod is Running.
14- Check the logs of the pods and containers.
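As mentioned above, here is a rough sketch of some of these steps using the official Python Kubernetes client (assuming a working kubeconfig); the namespace and label selector are placeholders for your faulty application.

# Rough sketch of the checklist steps with the `kubernetes` Python client.
# Namespace and label selector are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# Steps 1-2: nodes and namespaces
print([n.metadata.name for n in core.list_node().items])
print([ns.metadata.name for ns in core.list_namespace().items])

# Steps 4-7: deployments, replicasets, pods, and services in the suspect namespace
ns = "my-app-namespace"
print([d.metadata.name for d in apps.list_namespaced_deployment(ns).items])
print([r.metadata.name for r in apps.list_namespaced_replica_set(ns).items])
pods = core.list_namespaced_pod(ns, label_selector="app=my-app").items
print([p.metadata.name for p in pods])
print([s.metadata.name for s in core.list_namespaced_service(ns).items])

# Steps 13-14: pod status and logs
for pod in pods:
    print(pod.metadata.name, pod.status.phase)
    print(core.read_namespaced_pod_log(pod.metadata.name, ns))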
I hope that helps and feel free to add any step or thought in the comments. Thanks.
Different Ways to Access Oracle Cloud Infrastructure
- SDK for Java
- SDK for Python
- SDK for TypeScript and JavaScript
- SDK for .NET
- SDK for Go
- SDK for Ruby
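To give a flavour of one of these, here is a minimal sketch using the SDK for Python (assuming the oci package is installed and ~/.oci/config is populated); it simply lists the availability domains in the tenancy.

# Minimal OCI Python SDK sketch: list availability domains in the tenancy.
# Assumes ~/.oci/config holds a valid profile.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
identity = oci.identity.IdentityClient(config)

for ad in identity.list_availability_domains(config["tenancy"]).data:
    print(ad.name)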
Oracle 11g on AWS RDS Will Be Force Upgraded in Coming Months
Oracle Cloud's Beefed Up Security
Oracle Cloud for Existing Oracle Workloads
To help manage this ever-growing complexity, organizations need to select a cloud solution which is similar to their existing on-prem environments. Almost all serious enterprise outfits are running some sort of Oracle workload, and it only makes sense for them to select Oracle Cloud in order to leverage what they already know in a better and more modern way. They can then utilize these architecture best practices to help build and deliver great solutions.
Cost management, operational excellence, performance efficiency, reliability, and security are hallmarks of Oracle Cloud, plus some more. Oracle databases are already getting complex and autonomous. They are now harder to manage, and that is why it only makes sense to migrate them over to Oracle Cloud and let Oracle handle all the nitty gritty.
Designing and deploying a successful workload in any environment can be challenging. This is especially true as agile development and DevOps/SRE practices begin to shift responsibility for security, operations, and cost management from centralized teams to the workload owner. This transition empowers workload owners to innovate at a much higher velocity than they could achieve in a traditional data center, but it creates a broader surface area of topics that they need to understand to produce a secure, reliable, performant, and cost-effective solution.
Every company is on a unique cloud journey, but the core of Oracle is the same.
ADB-ExaC@C? What in the Heck Is Oracle Autonomous Database?
Since Oracle 10g, we have been hearing about the self-managed, self-healing, self-everything Oracle database. Oracle 10g was touted as the self-healing one, and if you have managed Oracle 7, 8i, or 9i, this was in fact true given how much pain 10g took away.
With everything moving over to the cloud, is that still the case? Or in other words, with Oracle's autonomous bandwagon plus their cloud offerings, is the autonomous database a reality now?
So what in the heck is Oracle Autonomous Database? Autonomous Database delivers a machine-learning-driven, self-managed database capability that natively builds in Oracle's extensive technology stack and best practices for self-driving, self-securing, and self-repairing operation.
Oracle says that their Autonomous Database is completely self-managed, allowing you to focus on business innovation instead of technology, and is consumed in a true pay-per-use subscription model to lower operational cost. Yes, we have heard almost similar claims with previous versions, but one main difference here is that this one is in the cloud.
Well, if you have opted for Exadata in Oracle's cloud, then it's true to a great extent. Oracle Autonomous Database on Exadata Cloud@Customer (ADB-ExaC@C) is here, and as Oracle would be managing it, you wouldn't have to worry about its management. But if it's autonomous, why would anyone, including Oracle, manage it? Shouldn't it be managing itself?
So this autonomous ADB-ExaC@C provides you with something called architectural identicality, which can easily be achieved by anything non-autonomous. They say it's elastic as it can auto-scale up and down; I think AWS Aurora and GCP BigQuery have been doing that for some time now. Security patching, upgrades, and backups are all behind the scenes and automated for ADB-ExaC@C. I am still at a loss as to what really makes it autonomous here.
Don't get me wrong. I am a huge fan of Exadata despite its blood-curdling price. Putting Exadata in the cloud and offering it as a service is a great idea too, as this enables many businesses to use it. My question is simple: ADB-ExaC@C is a managed service for sure, but what makes it autonomous?
What's Different About Oracle's Cloud
In the words of Larry Ellison, "The main economic benefit of Oracle’s Gen 2 Cloud Infrastructure is its autonomous capability, which eliminates human labor for administrative tasks and thus reduces human error. That capability is particularly important in helping prevent data theft against increasingly sophisticated, automated hacks."
But with an outdated, overly complex ERP system, one utility organization found it a challenge to efficiently provide financial information. For one thing, its heavily manual processes resulted in a lack of confidence in data, making it hard to drive productivity and service improvements. By insisting on zero customization of its Oracle Cloud applications, the organization ensured that regular updates are simple and that its processes are integrated and scalable. As a result, the utility has shortened its order lead times significantly, reduced customer complaints, and boosted overall customer satisfaction levels.
Oracle’s second-generation cloud offers autonomous operations that eliminate human error and provide maximum security, all while delivering truly elastic and serverless services with the highest performance—available globally both in the public cloud and your data centers.
ERP in Oracle's Cloud
Many companies have started to migrate to the cloud. Zoom selected Oracle as a cloud infrastructure provider for its core online meeting service; Zoom deployed Oracle Cloud within hours and enabled millions of meeting participants within weeks.
Oracle Cloud is a Generation 2 enterprise cloud that delivers powerful compute and networking performance and includes a comprehensive portfolio of infrastructure and platform cloud services. Built from the ground up to meet the needs of mission-critical applications, Oracle Cloud supports all legacy workloads while delivering modern cloud development tools, enabling enterprises to bring their past forward as they build their future.
Oracle's generation 2 Cloud is the only one built to run Oracle Autonomous Database, the industry's first and only self-driving database. Oracle Cloud offers a comprehensive cloud computing portfolio, from application development and business analytics to data management, integration, security, artificial intelligence (AI), and blockchain. Oracle customers are using Oracle Autonomous Database to transform their businesses by redefining database management through machine learning and automation.
Reduce operational costs by up to 90% with a multimodel converged database and machine learning-based automation for full lifecycle management. Oracle Autonomous Database runs natively on Oracle Cloud Infrastructure while providing workload-optimized cloud services for transaction processing and data warehousing. Oracle Database is the market leader and ranks #1 in the 2019 Gartner Critical Capabilities for Operational Database Management Systems report.
You can protect sensitive and regulated data automatically, patch your database for security vulnerabilities, and prevent unauthorized access—all with Oracle Autonomous Database. You can detect and protect from system failures and user errors automatically and provide failover to standby databases with zero data loss. Autonomous Data Warehouse is a cloud database service optimized for analytical processing. It automatically scales compute and storage, delivers fast query performance, and requires no database administration.
Database Management in Oracle Cloud
Oracle Autonomous Data Warehouse Cloud Service is a fully automated, high-performance, and elastic service. You will have all of the performance of market-leading Oracle Database in a fully automated environment that is tuned and optimized for data warehouse workloads.
Autonomous Transaction Processing
Oracle Autonomous Transaction Processing is a fully automated database service tuned and optimized for transaction processing or mixed workloads with the market-leading performance of Oracle Database. The service delivers a self-driving, self-securing, self-repairing database service that can instantly scale to meet demands of mission-critical applications.
Database Cloud Service: Bare Metal
The dense I/O configuration consists of a single Oracle 11g, 12c, or 18c Database instance on 2 OCPUs, with the ability to dynamically scale up to 52 OCPUs without downtime. Available storage configurations range from 5.4 to 51.2 TB of NVMe SSD local storage, with 2- and 3-way mirroring options available.
Database Cloud Service: Virtual Machine
The virtual machine configurations consist of a single Oracle 11g, 12c, or 18c Database instance. Choose from a single-OCPU virtual machine with 15 GB of RAM up to a RAC-enabled virtual machine with 48 OCPUs and over 600 GB of RAM. Storage configurations range from 256 GB to 40 TB.
Exadata Cloud Service
Oracle Exadata Cloud Service enables you to run Oracle Databases in the cloud with the same extreme performance and availability experienced by thousands of organizations which have deployed Oracle Exadata on premises. Oracle Exadata Cloud Service offers a range of dedicated Exadata shapes.
Exadata Cloud@Customer
Oracle Exadata Cloud@Customer is a unique solution that delivers integrated Oracle Exadata hardware and Oracle Cloud Infrastructure software in your data center with Oracle Exadata infrastructure managed by Oracle experts. Oracle Exadata Cloud@Customer is ideal for customers who desire cloud benefits but cannot yet move their databases to the public cloud.
Oracle NoSQL Database Cloud Service
A NoSQL Database Cloud Service with on-demand throughput and storage-based provisioning that supports document, columnar, and key-value data models, all with flexible transaction guarantees.
Oracle MySQL Database Service
MySQL Database Service is a fully managed database service that enables organizations to deploy cloud native database applications using the world’s most popular open source database. It is 100% developed, managed, and supported by the MySQL Team.
ORA-1652
When you hit ORA-1652 (unable to extend a temp segment), the temporary tablespace has usually filled up. To see which SQL allocated the most temp space in a given window, you can query the ASH history:
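-- Top temp space consumers by sql_id (in MB) during the chosen window, from ASH history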
select sql_id, sum(temp_space_allocated)/1024/1024
from dba_hist_active_sess_history
where sample_time between timestamp '2020-06-25 19:30:00' and timestamp '2020-06-25 20:00:00'
group by sql_id
order by 2 desc;
Also check real-time monitoring of the sessions and the V$TEMPSEG_USAGE view, which describes temporary segment usage; maybe some other query is using temp and rapidly filling it.
FiveTran Auth Error in Connecting Oracle RDS Database
I was trying to connect from Fivetran to an Oracle database hosted on AWS RDS and was getting an Auth error while connecting through the SSH tunnel. I was following the Fivetran instructions to create the SSH user, grant its rights, set up the keys, etc., but the Auth error still appeared.
The solution is to allow the fivetran user and its group in the /etc/ssh/sshd_config file (e.g., via the AllowUsers and AllowGroups directives) and restart the SSH daemon.
Only then will you be able to connect.