Feed aggregator

Can I do it with PostgreSQL? – 17 – Identifying a blocking session

Yann Neuhaus - 9 hours 57 min ago

One single blocking session in a database can bring your whole application to a halt, so identifying which session is blocking others is a task you must be able to perform quickly. In Oracle you can query v$session to get that information (blocking_session, final_blocking_session). Can you do the same in PostgreSQL? Yes, you definitely can, so let's go.

As usual we’ll start by creating a test table:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta2 dbi services build)
Type "help" for help.

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# 

One way to force other sessions to wait is to start a new transaction and modify the table:

postgres=# begin;
BEGIN
postgres=# alter table t1 add column t2 text;
ALTER TABLE
postgres=#  

… and then try to insert data into the same table from another session:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta2 dbi services build)
Type "help" for help.

postgres=# insert into t1 (a) values (1);

The insert statement will hang/wait because the modification of the table is still ongoing (the transaction has neither committed nor rolled back; remember that DDL in PostgreSQL is transactional). Now that we have a blocking session, how can we identify it?

What v$session is to Oracle, pg_stat_activity is to PostgreSQL (note: I am using PostgreSQL 10 Beta 2 here):

postgres=# \d pg_stat_activity 
                      View "pg_catalog.pg_stat_activity"
      Column      |           Type           | Collation | Nullable | Default 
------------------+--------------------------+-----------+----------+---------
 datid            | oid                      |           |          | 
 datname          | name                     |           |          | 
 pid              | integer                  |           |          | 
 usesysid         | oid                      |           |          | 
 usename          | name                     |           |          | 
 application_name | text                     |           |          | 
 client_addr      | inet                     |           |          | 
 client_hostname  | text                     |           |          | 
 client_port      | integer                  |           |          | 
 backend_start    | timestamp with time zone |           |          | 
 xact_start       | timestamp with time zone |           |          | 
 query_start      | timestamp with time zone |           |          | 
 state_change     | timestamp with time zone |           |          | 
 wait_event_type  | text                     |           |          | 
 wait_event       | text                     |           |          | 
 state            | text                     |           |          | 
 backend_xid      | xid                      |           |          | 
 backend_xmin     | xid                      |           |          | 
 query            | text                     |           |          | 
 backend_type     | text                     |           |          | 

There is no column which identifies a blocking session but there are other interesting columns:

postgres=# select datname,pid,usename,wait_event_type,wait_event,state,query from pg_stat_activity where backend_type = 'client backend' and pid != pg_backend_pid();
 datname  | pid  | usename  | wait_event_type | wait_event |        state        |               query                
----------+------+----------+-----------------+------------+---------------------+------------------------------------
 postgres | 2572 | postgres | Client          | ClientRead | idle in transaction | alter table t1 add column t2 text;
 postgres | 2992 | postgres | Lock            | relation   | active              | insert into t1 (a) values (1);
(2 rows)

This shows only client connections (excluding all the background processes) and does not show the current session. In this case it is easy to identify the blocking session because we only have two sessions, but when you have hundreds of sessions it becomes much trickier to identify the blocker by looking at pg_stat_activity.

When you want to know which locks are currently held, or waited for, in PostgreSQL you can query pg_locks:

postgres=# \d pg_locks
                   View "pg_catalog.pg_locks"
       Column       |   Type   | Collation | Nullable | Default 
--------------------+----------+-----------+----------+---------
 locktype           | text     |           |          | 
 database           | oid      |           |          | 
 relation           | oid      |           |          | 
 page               | integer  |           |          | 
 tuple              | smallint |           |          | 
 virtualxid         | text     |           |          | 
 transactionid      | xid      |           |          | 
 classid            | oid      |           |          | 
 objid              | oid      |           |          | 
 objsubid           | smallint |           |          | 
 virtualtransaction | text     |           |          | 
 pid                | integer  |           |          | 
 mode               | text     |           |          | 
 granted            | boolean  |           |          | 
 fastpath           | boolean  |           |          | 

What can we see here:

postgres=# select locktype,database,relation,pid,mode,granted from pg_locks where pid != pg_backend_pid();
   locktype    | database | relation | pid  |        mode         | granted 
---------------+----------+----------+------+---------------------+---------
 virtualxid    |          |          | 2992 | ExclusiveLock       | t
 virtualxid    |          |          | 2572 | ExclusiveLock       | t
 relation      |    13212 |    24576 | 2992 | RowExclusiveLock    | f
 relation      |    13212 |    24581 | 2572 | AccessExclusiveLock | t
 transactionid |          |          | 2572 | ExclusiveLock       | t
 relation      |    13212 |    24579 | 2572 | ShareLock           | t
 relation      |    13212 |    24576 | 2572 | AccessExclusiveLock | t
(7 rows)

There is one lock for session 2992 which is not granted: that is the session which is currently trying to insert a row into the table (see above). We can get more information by joining pg_locks to pg_database and pg_class, taking the PIDs from above:

select b.locktype,d.datname,c.relname,b.pid,b.mode 
  from pg_locks b 
     , pg_database d
     , pg_class c
 where b.pid in (2572,2992)
   and b.database = d.oid
   and b.relation = c.oid;

 locktype | datname  | relname | pid  |        mode         
----------+----------+---------+------+---------------------
 relation | postgres | t1      | 2992 | RowExclusiveLock
 relation | postgres | t1      | 2572 | AccessExclusiveLock
(2 rows)

Does that help us, beyond knowing that both sessions want to do something with the t1 table? Not really. So how can we identify a blocking session? Easy: use the pg_blocking_pids system information function, passing in the PID of the blocked session:

postgres=# select pg_blocking_pids(2992);
 pg_blocking_pids 
------------------
 {2572}
(1 row)
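
This gives you the array of PIDs that are blocking the session you passed in. To get an overview of every blocked session together with its blockers in one go, you can combine pg_blocking_pids with pg_stat_activity. A minimal sketch (the aliases and layout are mine, not from the original post):

select blocked.pid    as blocked_pid,
       blocked.query  as blocked_query,
       blocking.pid   as blocking_pid,
       blocking.query as blocking_query
  from pg_stat_activity blocked
  join pg_stat_activity blocking
    on blocking.pid = any (pg_blocking_pids(blocked.pid));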

Can we kill the blocking session? Yes, of course: PostgreSQL comes with a rich set of system administration functions:

postgres=# select pg_terminate_backend(2572);
 pg_terminate_backend 
----------------------
 t

… and the insert succeeds. Hope this helps …
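
By the way: if you only want to cancel the blocker's current statement rather than terminate its whole session, there is also pg_cancel_backend (the call looks the same, e.g. select pg_cancel_backend(2572);). Note, though, that cancelling a session that is idle in transaction does not release its locks, which is why pg_terminate_backend was the right tool here.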

PS: There is a great page on the PostgreSQL Wiki about locks.

 

The article Can I do it with PostgreSQL? – 17 – Identifying a blocking session appeared first on Blog dbi services.

Recommended join style

Tom Kyte - 10 hours 40 min ago
Dear Oracle Masters, here is a poor disciple looking for guidance, I know the way to reach the true knowledge does not have an end, but I would appreciate few words to make my journey more safe, especially for my fellow travelers. Here is my q...
Categories: DBA Blogs

Different behaviours in implicit conversion on 11g and 12c NVARCHAR type

Tom Kyte - 10 hours 40 min ago
Hi, We have encountered the below scenario when used with 11g and 12c respectively. -- test data: create table t1 (tid nvarchar2(10) primary key); insert into t1 values ('123'); insert into t1 values ('123-00'); insert into t1 value...
Categories: DBA Blogs

Subpartitioning on IOT tables

Tom Kyte - 10 hours 40 min ago
Hello! I've asked in many places, but no definite answer. We are using IOT tables as a counter storage medium. They are perfect for that purpose as they consist of only PK + VALUE, or PK + VALUE1, VALUE2, VALUE3 up to some VALUEn, n being s...
Categories: DBA Blogs

Interdependent Foreign Key Constraints

Tom Kyte - 10 hours 40 min ago
SQL> CREATE TABLE A(NO1 NUMBER(2) PRIMARY KEY,NO2 NUMBER(2)); Table created. SQL> CREATE TABLE B(NO1 NUMBER(2) PRIMARY KEY,NO2 NUMBER(2)); Table created. SQL> ALTER TABLE A ADD CONSTRAINT AA FOREIGN KEY(NO2) REFERENCES B(NO1); Table al...
Categories: DBA Blogs

Why NLS_UPPER maps lowercase 'i' to '?' when NLS_SORT is set to 'xturkish'?

Tom Kyte - 10 hours 40 min ago
Hello, Can you please explain why NLS_UPPER in the following script maps lowercase 'i' to '?' whereas all other alphabets are mapped correctly to corresponding uppercase alphabets. SQL> ALTER SESSION SET NLS_SORT="xturkish"; Session altere...
Categories: DBA Blogs

Why Do We Have Commit/Rollback on Explain Plan

Tom Kyte - 10 hours 40 min ago
Hi Team, I ran the below query to see the explain plan for my view, but after the query execution completed and I was trying to disconnect, it asked that connection <connection_name> has uncommitted changes: 1:- commit changes 2:- rollback changes 3:-...
Categories: DBA Blogs

Redo OP Codes:

Jonathan Lewis - 11 hours 28 min ago

This posting was prompted by a tweet from Kamil Stawiarski in response to a question about how he’d discovered the meaning of Redo Op Codes 5.1 and 11.6 – and credited me and Julian Dyke with “the hardest part”.

Over the years I’ve accumulated (from Julian Dyke, or odd MoS notes, etc.) and let dribble out the occasional interpretation of a few op codes – typically in response to a question on the OTN database forum or the Oracle-L listserver, and sometimes as a throwaway comment in a blog post, but I’ve never published the full set of codes that I’ve acquired (or guessed) to date.

It’s been some time since I’ve looked closely at a redo stream, and there are many features of Oracle that I’ve never had to examine at the level of the redo, so there are plenty of gaps in the list – and maybe a few people will use the comments to help fill the gaps.

It’s possible that I may be able to add more op codes over the next few days – I know that somewhere I have some op codes relating to space management, and a batch relating to LOB handling, but it looks like I forgot to add them to the master list – so here’s what I can offer so far:


1	Transaction Control

2	Transaction read

3	Transaction update

4	Block cleanout
		4.1	Block cleanout record
		4.2	Physical cleanout
		4.3	Single array change
		4.4	Multiple array changes
		4.5	Format block
		4.6	ktbcc redo -  Commit Time Block Cleanout Change (?RAC, ?recursive, ?SYS objects)

5	Transaction undo management
		5.1	Update undo block
		5.2	Get undo header
		5.3	Rollout a transaction begin
		5.4	On a rollback or commit
		5.5	Create rollback segment
		5.6	On a rollback of an insert
		5.7	In the ktubl for 'dbms_transaction.local_transaction_id'
			(begin transaction) - also arrives for incoming distributed
			tx, no data change but TT slot acquired. Also for recursive
			transaction (e.g. truncate). txn start scn:  0xffff.ffffffff
		5.8	Mark transaction as dead
		5.9	Rollback extension of rollback seg
		5.10	Rollback segment header change for extension of rollback seg
		5.11	Mark undo as applied during rollback
		5.19	Transaction audit record - first
		5.20	Transaction audit record - subsequent
		5.23	ktudbr redo: disable block level recovery (reports XID)
		5.24	ktfbhundo - File Space Header Undo

6	Control file

10	Index
		10.1	SQL load index block
		10.2	Insert Leaf Row
		10.3	Purge Leaf Row
		10.4	Delete Leaf Row
		10.5	Restore Leaf during rollback
		10.6	(kdxlok) Lock block (pre-split?)
		10.7	(kdxulo) unlock block during undo
		10.8	(kdxlne) initialize leaf block being split
		10.9	(kdxair) apply XAT do to ITL 1	-- related to leaf block split 
		10.10	Set leaf block next pointer
		10.11	(kdxlpr) (UNDO) set kdxleprv (previous pointer)
		10.12 	Initialize root block after split
		10.13	index redo (kdxlem): (REDO) make leaf block empty,
		10.14	Restore block before image
		10.15	(kdxbin) Insert branch block row	
		10.16	Purge branch row
		10.17	Initialize new branch block
		10.18	Update key data in row -- index redo (kdxlup): update keydata
		10.19	Clear split flag
		10.20	Set split flag
		10.21	Undo branch operation
		10.22	Undo leaf operation
		10.23	restore block to tree
		10.24	Shrink ITL
		10.25	format root block
		10.26	format root block (undo)
		10.27	format root block (redo)
		10.28	Migrating block (undo)
		10.29	Migrating block (redo)
		10.30	Update nonkey value
		10.31	index root block redo (kdxdlr):  create/load index
		10.34 	make branch block empty
		10.35	index redo (kdxlcnu): update nonkey
		10.37	undo index change (kdxIndexlogicalNonkeyUpdate) -- bitmap index
		10.38	index change (kdxIndexlogicalNonkeyUpdate) -- bitmap index
		10.39	index redo (kdxbur) :  branch block update range
		10.40	index redo (kdxbdu) :  branch block DBA update,

11	Table
		11.1  undo row operation 
		11.2  insert row  piece
		11.3  delete row piece 
		11.4  lock row piece
		11.5  update row piece
		11.6  overwrite row piece
		11.7  manipulate first column
		11.8  change forwarding address - migration
		11.9  change cluster key index
		11.10 Set Cluster key pointers
		11.11 Insert multiple rows
		11.12 Delete multiple rows
		11.13 toggle block header flags
		11.17 Update multiple rows
		11.19 Array update ?
		11.20 SHK (mark as shrunk?)
		11.24 HCC update rowid map ?

12	Cluster

13	Segment management
		13.1	ktsfm redo: -- allocate space ??
		13.5	KTSFRBFMT (block format) redo
		13.6	(block link modify) (? index )  (options: lock clear, lock set)
		13.7	KTSFRGRP (fgb/shdr modify freelist) redo: (options unlink block, move HWM)
		13.13	ktsbbu undo - undo operation on bitmap block
		13.14	ktsbbu undo - undo operation on bitmap block
		13.17	ktsphfredo - Format Pagetable Segment Header
		13.18	ktspffredo - Format Level1 Bitmap Block
		13.19	ktspsfredo - Format Level2 Bitmap Block
		13.21	ktspbfredo - Format Pagetable Datablock
		13.22	State change on level 1 bitmap block
		13.23	Undo on level 1 bitmap block
		13.24	Bitmap block (BMB) state change (level 2 ?)
		13.25	Undo on level 2 bitmap block 
		13.26	?? Level 3 bitmap block state change ??
		13.27	?? Level 3 bitmap block undo ??
		13.28	Update LHWM and HHWM on segment header
		13.29	Undo on segment header
		13.31	Segment shrink redo for L1 bitmap block
		13.32	Segment shrink redo for segment header block

14	Extent management
		14.1	ktecush redo: clear extent control lock
		14.2	ktelk redo - lock extent (map)
		14.3	Extent de-allocate
		14.4	kteop redo - redo operation on extent map
		14.5	kteopu undo - undo operation on extent map
		14.8	kteoputrn - undo operation for flush for truncate

15	Tablespace

16	Row cache

17	Recovery management
		17.1	End backup mode marker
		17.3	Crash Recovery at scn:  0x0000.02429111
		17.28	STANDBY METADATA CACHE INVALIDATION
	
18	Block image (hot backups)
		18.1	Block image
		18.3	Reuse redo entry 
				   (Range reuse: tsn=1 base=8388753 nblks=8)
				or (Object reuse: tsn=2 objd=76515)

19	Direct loader
		19.1	Direct load block record
		19.2	Nologging invalidate block range
			Direct Loader invalidate block range redo entry

20	Compatibility segment

21	LOB segment 
		21.1	kdlop (Long Feild) redo:  [sic]
				(insert basicfile clob)

22	Locally managed tablespace
		22.2	ktfbhredo - File Space Header Redo:
		22.3	ktfbhundo - File Space Header Undo:
		22.5	ktfbbredo - File BitMap Block Redo:
		22.16	File Property Map Block (FPM)

23	Block writes
		23.1	Block written record
		23.2	Block read record (BRR) -- reference in Doc ID: 12423475.8

24	DDL statements
		24.1	DDL
		24.2	Direct load block end mark
		24.4	?? Media recovery marker
		24.10	??
		24.11	??

(E & O.E) – you’ll notice that some of the descriptions include question marks – those are guesses – and some are little more than the raw text extracted from a redo change vector with no interpretation of what they might mean.
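
If you want to explore (or verify) any of these yourself, the usual approach is to dump a redo log file to a trace file, optionally filtered by layer and opcode. A sketch – the file name is illustrative, and this is not something to run casually on a production system:

alter system dump logfile '/u01/oradata/ORCL/redo01.log' layer 11 opcode 6;

The resulting trace file in the diagnostic trace directory shows each change vector tagged with its OP:layer.code, which is where descriptions like the ones above come from.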

 


Industrial IoT Strategy, The Transference of Risk by using a Digital Twin

Amis Blog - 21 hours 5 min ago

The Internet of Things (IoT) is all about getting in-depth insight about your customers. It is the inter-networking of physical devices, vehicles (also referred to as “connected devices” and “smart devices”), buildings, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data.

For me, IoT is an extension of data integration and big data. The past decade I have worked in the integration field and adding smart devices to these systems makes it even more interesting. Connecting the real world with the digital one creates a huge potential for valuable digital services on top of the physical world. This article contains our vision and guidance for a strategy for The Internet of Things based on literature and our own experience. 

Drivers for business.

Everybody is talking about The Internet of Things. This is going to become a billion dollar business in the near future. IoT has become a blanket term for smart, connected devices. Technology is giving these devices the ability to sense and act for themselves, cause an effect on the environment and be controlled by us. Especially in the industrial world, the application of smart sensors has the potential to change the landscape of current suppliers of large scale industrial solutions.

This is the perfect storm

For decades we have had devices with sensors and connectivity, but until now these devices never reached the market potential they currently have. IoT is slowly becoming a mainstream technology. Only two years ago, technical limitations in processing power, storage, connectivity and platform accessibility were hindering the growth of IoT device usage.

Now we see a perfect storm: The advances in cloud computing, big data storage, an abundance of fast internet access, machine learning, and smart sensors come together. The past economic crisis has made businesses start focusing more on lean manufacturing, measuring and real-time feedback. And finally, our addiction to social media and direct interaction makes us accustomed to instant feedback. We demand real time process improvement and in-depth, highly personalized services. This can only be achieved by probing deep into data about the behavior of consumers.

Digital Transformation changes our economy.  

Smart devices are a driver for efficiency. On one hand, we can save power usage – by switching off unused machines, for example – and boost the effective usage of machines by optimizing their utilization. For example: have cleaning robots visit rooms with a lot of traffic more often, instead of keeping the same schedule for all rooms. Intensive data gathering offers the possibility to optimize our processes and apply effective usage of machines and resources. These solutions are aimed at saving money. Your customers expect this process data as an inclusive service on top of the product they buy from you. In practice: look at the Nest thermostat; the dashboard and data are perceived as part of the device. Nobody is going to pay extra for the Nest dashboard.

Create value using a digital twin of your customer

You can make a real difference with IoT when you consider the long term strategic goals of your company. Smart devices make it possible to acquire extensive data about your customer. This information is very valuable, especially when you combine the individual sensor data of each customer into a complete digital representation of that customer (also called a digital twin). This is very valuable for both B2B and B2C businesses. Having a digital twin of your customer helps you know exactly what your customer needs and what makes them successful. You can create additional services and a better user experience with the data you acquire. Your customers are willing to pay for an add-on when you are able to convert their data into valuable content and actions. This is how you create more revenue.

IoT is all about transference of risk and responsibility

I predict IoT will transform the economy. With IoT, you are able to influence your customer and their buying habits. You are able to measure the status and quality of a heating installation, car engine or security system. You are able to influence the operation of these machines and warn your customer up front about possible outages due to wear and usage. The next logical step for your customer is to transfer the responsibility for these machines to you as a supplier. This has huge consequences for the risk profile of your company and the possible liabilities connected to it. Having an extensive sensor network and an operational digital twin of the customer makes it possible to assess and control this risk. You can implement predictive maintenance and reduce the risk of an outage, since you have a vast amount of data and trained algorithms to predict the future state of the machines. Customers are prepared to pay an insurance fee if you can guarantee the operational state and business continuity.

How to create a profitable IoT strategy?

The first step is to determine what kind of company you want to be in the IoT realm. According to Frank Burkitt and Brian Solis, there are three types of companies building IoT services:

  • Enablers
    These are the companies that develop and implement IoT technology; they deliver pure IoT solutions, ranging from hardware to all kinds of cloud systems. They have no industry focus and deliver generic IoT solutions. The purpose of these companies is to process the highest possible volume at a low price. The enablers will focus on delivering endpoint networks and cloud infrastructure. This market will be dominated by a small number of global players who deliver devices, sensors, and suitable cloud infrastructure.
  • Engagers
    These are the companies who design, create, integrate, and deliver IoT services to customers. The purpose of these companies is to deliver customer intimacy by adding close interaction with the end users, aiming their strategy at customer intimacy via IoT, usually within one specific industry or product stack. The engagers will focus on hubs and market-facing solutions like dashboards and historical data. This market will contain traditional software companies able to offer dashboards on top of existing systems and connecting IoT devices.
  • Enhancers
    These are the companies that deliver their own value-added services on top of services delivered by the Engagers. The services of the Engagers are unique to IoT and add a lot of value for the end user. Their goal is to provide richer end-user engagement and to surprise and delight the customer by offering new services that use the customer's data, enhanced with their own experience and third party sources. This market will contain innovative software companies able to bridge the gap between IoT, Big Data and Machine Learning. These companies need excellent technical and creative skills to offer new and disruptive solutions.
How to be successful in the IoT World?
  1. Decide the type of company you want to be: Enabler, Engager or Enhancer? If you are an enabler, make sure you offer a distinctive difference compared to existing platforms and devices.
  2. Identify your target market as you need to specialize in making a significant difference.
  3. Hire a designer and a business developer if you are not one yourself.
  4. Develop using building blocks.
    Enhance existing products and services. Be very selective about what you want to offer. Do not reinvent the wheel: use existing products and services, and build on the things that are already offered as SaaS solutions.
  5. Create additional value
    Enhance existing services with insight and algorithms. Design your service in such a way that you create additional value in your network. Create new business models and partner with companies outside your industry.
  6. Invest in your company
    Train your employees and build relationships with other IoT companies.
  7. Experiment with new ideas, create an innovation lab and link to companies outside your comfort zone to add them to your service.

You are welcome to contact us if you want to know more about adding value to your products and services using IoT.
We can help you make your products and services smart at scale.  Visit our IoT services page

The post Industrial IoT Strategy, The Transference of Risk by using a Digital Twin appeared first on AMIS Oracle and Java Blog.

E-Business Suite Release Roadmap Updated

Steven Chan - 21 hours 46 min ago

Here's more information on the Oracle E-Business Suite Roadmap update announced last month:

 

Categories: APPS Blogs

How to change sequence.nextval increase amount

Tom Kyte - Mon, 2017-07-24 18:46
I have table a: create table a (sno number(10)); create sequence test start with 1 increment by 1 nocycle nocache; insert into a (sno) values(test.nextval); I have executed the above insert statement 1000 times. Now ...
Categories: DBA Blogs

Extracting very long string from JSON to CLOB

Tom Kyte - Mon, 2017-07-24 18:46
Hi, Tom. I'm trying to extract a very long string into a clob from json_object_t and got some weird database behaviour (12.2c) with the json_object_t.get_clob(key) method. Here is a sample code: DECLARE l_data CLOB := '{"text": "very long string...
Categories: DBA Blogs

Add Unique column using other column of the table

Tom Kyte - Mon, 2017-07-24 18:46
We have a table with 100 columns and the number of records is around 1.3 million. There are also around 40 indexes created on the table. A subset of the table is as below: create table t ( username varchar2(100), DOJ date, recid varch...
Categories: DBA Blogs

User logged in through proxy only sees default role

Tom Kyte - Mon, 2017-07-24 18:46
Hi Tom, I have a user with two roles, one role without password that is set by default, and another role that is set with a password. This user has been granted to connect through a proxy user with flag of "PROXY MAY ACTIVATE ALL CLIENT ROLES". ...
Categories: DBA Blogs

ORACLE COMMAND issues

Tom Kyte - Mon, 2017-07-24 18:46
Hi Tom, I am facing this issue with the COPY command. I am trying to copy data from a remote database to a local database; the table has more than 90 fields and I am using TO_CHAR() in the SELECT statements. I am getting the below error. Could you p...
Categories: DBA Blogs

Python cx_Oracle 6.0 RC 2 is on PyPI

Christopher Jones - Mon, 2017-07-24 18:16

Python cx_Oracle is the Python interface for Oracle Database

Anthony Tuininga has released the second (and probably final) Release Candidate of Python cx_Oracle 6.0 on PyPI. It's now full steam ahead towards the production release, so keep on testing. Issues can be reported on GitHub or the mailing list.

To take a look, use the '--pre' option to pip to install this pre-release:

python -m pip install cx_Oracle --upgrade --pre

You will also need Oracle client libraries, such as from the free Oracle Instant Client.
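
Once installed, a quick smoke test looks something like this. A minimal sketch: the credentials and connect string are placeholders, not something from this release:

import cx_Oracle

# placeholder credentials and DSN - substitute your own
connection = cx_Oracle.connect("hr", "hr_password", "dbhost.example.com/orclpdb")
cursor = connection.cursor()
for row in cursor.execute("select 1 from dual"):
    print(row)   # prints (1,)
connection.close()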

I want to highlight a few of the changes, focusing, as readers of my blog will know I favor, on usability.

  • This release picks up the latest ODPI-C Oracle DB abstraction layer, which provides some nice fixes. In particular, one fix resolved a confusing Windows system message about a 'UnicodeDecodeError' displayed when cx_Oracle was imported. Now the actual Windows error message is displayed, allowing you to see what the root problem is.

  • The installation notes have been tidied up and made into a new Installation Chapter of the documentation, complete with troubleshooting tips.

  • Some more introductory samples have been added, and the sample and test schema creation scripts improved. The scripts now reference a common file to set credentials, making it easier to play with them without needing to edit every single one.

The full cx_Oracle release notes are here. Let us know what you find.

The Oracle Database Cloud Development Story

OTN TechBlog - Mon, 2017-07-24 17:42

Are you a software developer looking for the best cloud database for application development?

Oracle Database Exadata Express Cloud Service is the ideal entry-level service for running Oracle Database in Oracle Cloud.

Here's why:

  • It delivers an affordable and fully managed Oracle Database 12c Release 2 experience, with enterprise options, running on Oracle Exadata.
  • It is a great fit for small and medium-sized production databases as well as development, testing and evaluation environments.
  • For developers, Exadata Express provides easy access to advanced development features of Oracle Database, enabling you to rapidly create modern data-driven applications.

 

Exadata Express in Oracle Cloud delivers an easy, affordable and feature-rich enterprise database experience. You do not need to worry about network or storage configuration, patching, upgrades or other DBA tasks. These activities are managed for you by Oracle, so no customer DBA is required. Exadata Express gives you the same compatible Oracle Database Enterprise Edition that runs on-premises and in other Oracle Database Cloud Services, provisioned for you within minutes. It uses one of Oracle’s most advanced configurations, combining shared Oracle Exadata engineered systems for the highest performance and availability with Oracle Multitenant Pluggable Database (PDB) containerization technology for security isolation, resource management and lowest cost. With support for up to 50 GB of database storage, Exadata Express is an ideal entry-level service for small and medium sized databases used in production, development, testing and evaluation environments.

 

OVERVIEW OF EXADATA EXPRESS

 

Service   Subscription Price   Storage Maximum   Data Transfer Maximum
X20       $175 / month         20 GB             120 GB / month
X50       $750 / month         50 GB             300 GB / month
X50IM*    $950 / month         50 GB             300 GB / month

*Provides up to 5 GB additional RAM for use with Oracle Database In-Memory Column Store

Oracle Database 12c Release 2 gives software developers a unifying database that includes support for new data management and access models: native support for RESTful interfaces and a schema-less documents-and-collections interface, in addition to standard SQL. The database can natively store JSON, XML and relational data all in a single environment.
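
To illustrate, here is a hedged sketch of mixing JSON with relational columns in 12c (the table, column and data are invented for illustration):

create table orders
( id  number primary key,
  doc varchar2(4000) check (doc is json)  -- constrain the column to valid JSON
);

insert into orders values (1, '{"customer":"ACME","total":42}');

select json_value(o.doc, '$.customer') as customer
  from orders o
 where json_value(o.doc, '$.total' returning number) > 10;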

In addition, developers get client drivers for all their favorite application environments and programming languages including Java, .NET, Python, Node.js, PHP, C/C++, Ruby and more. Developers can take advantage of free integrated development environments from Oracle for creating and debugging their applications including SQL Developer, Data Modeler and JDeveloper.

With Oracle Database 12c Release 2, developers also get pre-configured Oracle Application Express 5 (APEX). This is a simple declarative environment for rapid development of data-driven web apps using only your web browser. No additional tools are required. APEX version 5 includes all new packaged controls, updated themes, a gallery of productivity applications, and other enhancements that make apps look beautiful across desktop and mobile browsers.

 

 

Oracle Database Exadata Express Cloud Service

Apex 5 Application Builder in Exadata Express

 

But more than advanced features and native support for popular languages, Oracle has brought out several new PaaS offerings Just for Developers:

Oracle’s Application Container Cloud Service includes support for PHP, along with Node.js and Java.

Applications composed from multiple PaaS services can be created, scaled and managed as a single unit with the new Oracle Cloud Stack Manager.

Java EE apps are migrated to the Cloud automatically using Oracle AppToCloud while adding capabilities like active standby and increasing cluster size as the application is moved to the Cloud.

Anyone can author complete applications from a browser without coding skills, using the Oracle Application Builder Cloud Service low-code development platform to extend services with pre-populated Oracle Software-as-a-Service APIs or custom services from a common REST API catalog.

Oracle Developer Cloud Service also makes collaboration between developers easier by integrating with popular collaboration tools like Slack, Hipchat, and Hashicorp's Packer and Terraform, and by including agile management features for managing sprints, tasks and backlogs.

Oracle Mobile Cloud Service now provides actionable insights and engagement across multi-channels and micro locations to improve the customer experience. It also provides an intelligent and contextual Chatbots experience across multiple messaging channels like Facebook Messenger, Slack, Kik and others.

Ready to code? Get a Free Trial of the Oracle Database Exadata Express Cloud Service.

Ciao for now,

LKR

Ingress: Level 16 reached...

Dietrich Schroff - Mon, 2017-07-24 17:00
After a long time playing Google/Niantic's Ingress (the predecessor of Pokemon Go - all the arenas are portals in Ingress, most of them created by Ingress players) I reached the last level:



And the usual welcome package:


I am wondering whether I should continue playing or quit the game, for now or forever.
Perhaps reading this discussion (What do lvl 16 players play for?) may help me.

Management Mantra for Startups - Waste Not, Vacate Not

Abhinav Agarwal - Mon, 2017-07-24 15:28
Image credit: pexels.com
Waste Not, Vacate Not.
When Jeff Bezos, founder and CEO of Amazon, started Amazon, he, along with Shel Kaphan, programmer and founding employee, used sixty-dollar doors from Home Depot as desks. It was the demand of frugality. More than a decade later, when Amazon was a multi-billion dollar behemoth, conference-room tables were still made of door-desks. It reflected its CEO's adamant belief in "frugality." A leadership principle at Amazon states that "Frugality breeds resourcefulness, self-sufficiency and invention." Unless you have been living in a world without news, you would know that Amazon's market capitalization, as of July 23rd, was a shade under US$500 billion, with trailing twelve-month revenues in excess of US$140 billion, growing at an annual rate of more than 20%.

All this about Amazon's culture of frugality is captured in Brad Stone's brilliant book on the company, "The Everything Store: Jeff Bezos and the Age of Amazon."
"Bezos met me in an eighth-floor conference room and we sat down at a large table made of half a dozen door-desks, the same kind of blond wood that Bezos used twenty years ago when he was building Amazon from scratch in his garage. The door-desks are often held up as a symbol of the company’s enduring frugality."
...
They set up shop in the converted garage of Bezos’s house, an enclosed space without insulation and with a large, black potbellied stove at its center. Bezos built the first two desks out of sixty-dollar blond-wood doors from Home Depot, an endeavor that later carried almost biblical significance at Amazon, like Noah building the ark.
...
"Door-Desk award, given to an employee who came up with “a well-built idea that helps us to deliver lower prices to customers”—the prize was a door-desk ornament. Bezos was once again looking for ways to reinforce his values within the company."
...
"Conference-room tables are a collection of blond-wood door-desks shoved together side by side. The vending machines take credit cards, and food in the company cafeterias is not subsidized. When a new hire joins the company, he gets a backpack with a power adapter, a laptop dock, and some orientation materials. When someone resigns, he is asked to hand in all that equipment—including the backpack." [The Everything Store, by Brad Stone]So what does this have to do with Flipkart?Flipkart has been in business for (almost) ten years now (it was founded in October 2007). It has raised more than $4 billion dollars from investors, the most recent round of funding closing in early 2017. The Indian e-commerce pioneer however has yet to make a single new paisa in profit. In its fiscal year ending March 31st, 2016, its losses doubled to ₹2,306 crores (approximately US$350 million). Keep that in mind as you go through this post.

In October 2014, coming off the back of two funding rounds that saw it raise more than $1 billion from investors, came news that Flipkart had entered into an agreement to lease 3 million square feet of prime office space for an estimated annual rent of ₹300 crores (approximately US$48 million at the then exchange rates). This figure was cut down to 2 million sq ft by the time the deal was announced in May 2015. Even with the reduced commitment, it was, at the time, touted as the "single largest commitment of office space anywhere in the country."

In late 2015, several news sites, including the Economic Times, posted extensive photos of Flipkart's new office at the Cessna Business Park in Bengaluru. A cursory look at the office, as revealed by the photos, told a story of a no-expenses spared philosophy at work. Each floor had a "theme inspired by human greatness in various fields – science, sports, fashion, music". Hallways were designed to resemble running tracks, with the Olympic logo emblazoned prominently.





Images credit: Economic Times

By 2016, Flipkart's numerous missteps had only compounded its woes in the face of an unrelenting foe in the form of a rampaging Amazon. In November 2016, therefore, the news came as no surprise that Flipkart had decided to forego almost half of the office space it had signed up for a year earlier. Instead of the two million square feet, the company wanted no more than 1.2 million sq ft. In addition, it had negotiated lowered fitment costs, from ₹2,400 to ₹1,500 per sq ft.

Juxtapositions are meant to contrast. They can also be cruel. 
Like when it was reported in Jan 2017 that Amazon had leased more than one million sq ft of office space in India in 2016. That Amazon had leased more office space in 2015 than it had in all its previous years of presence in India. That it was reported in June 2017 that Amazon had leased 600,000 sq ft of office space in Hyderabad.

On top of this juxtaposition, let's add a dash of irony. Both of Flipkart's founders, Sachin Bansal and Binny Bansal, had worked at Amazon before leaving to start Flipkart. Jeff Bezos' mantra of frugality had either never been learned, or had perhaps been buried under the billions of investor money.

Since we are talking about contrasts, let me end with one more. In September 2016, it was reported that Flipkart was planning to cut its staff by 800, on top of 400 "performance-related" exits in July. In May 2016, it communicated to India's premier management institutes - Indian Institutes of Management at Ahmedabad, Bangalore, Lucknow, and the Faculty of Management Studies, Delhi - that it would defer the joining dates of students it had made job offers to by six months. In response, "The authorities at IIM-A have sent a strongly worded letter to Flipkart, marking other premier B-schools such as IIM-Bangalore, IIM-Lucknow and the Faculty of Management Studies, Delhi."

 What about Amazon? The company, in a press release in January 2017, announced that it would "Create More Than 100,000 New, Full-Time, Full-Benefit Jobs across the U.S. over the Next 18 Months."

What's the takeaway? That companies need to beware the curse of the new headquarters? Or that founders need to focus on companies that can stand on their own feet? That CEOs need to focus on execution? That boards and investors cannot function as absentee landlords?

[I have written at length on this fascinating slugfest. When I read about and witnessed its mobile-only obsession I had called it a dangerous distraction, not to mention a revenue chimera and a privacy nightmare. I warned that Flipkart was making a mistake, a big mistake, in taking its eye off the ball in competing against Amazon, using a cricket analogy that should have been familiar to the Indian founders. I wrote about how hubris-driven million-dollar hires had resulted in billion dollar erosions in valuations. I wrote about what had become an ever-revolving door of executive exits at Flipkart. I wrote about brand management snafus at Flipkart.]






Images credit: Yahoo

This post first appeared in LinkedIn Pulse on July 23rd, 2017.

"The Everything Store: Jeff Bezos and the Age of Amazon" (USINKindle USKindle IN)




© 2017, Abhinav Agarwal. All rights reserved.

Oracle Compute Cloud – Uploading My Image – Part Two – Linux 7

Amis Blog - Mon, 2017-07-24 14:20

In this sequel to part one I will show how you can upload your own (Oracle) Linux 7 image to the Oracle IaaS Cloud. This post uses the lessons learnt with AWS, which I described here.

The tools used are: VirtualBox, Oracle Linux 7, Oracle IAAS Documentation and lots of time.

With Oracle as Cloud provider it is possible to use the UEKR3 or UEKR4 kernels in the image that you prepare in VirtualBox. There is no need to temporarily disable the UEKR3 or UEKR4 repos in your installation. I reused the VirtualBox VM that I had prepared for the previous blog: AWS – Build your own Oracle Linux 7 AMI in the Cloud.

The details:

The main part here is (again) making sure that the XEN blkfront and netfront drivers are installed in your initramfs. There are multiple ways of doing so. I prefer changing dracut.conf:

 # additional kernel modules to the default
 add_drivers+="xen-blkfront xen-netfront"
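
With that line in place every future initramfs rebuild will pick up the drivers automatically, but you still have to rebuild the initramfs of the running kernel once for it to take effect. A sketch:

dracut -f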

You could also use:

# rebuild the initramfs of every installed kernel with the xen drivers added
rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} dracut -f --add-drivers 'xen-blkfront xen-netfront' /boot/initramfs-{}.img {}

But it is easy to forget to check whether you need to rebuild your initramfs after you have done a "yum update". I know, I have been there…
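
A quick way to verify, after any kernel update, that the drivers actually made it into the current initramfs (a sketch; lsinitrd ships with dracut):

lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'xen-(blk|net)front'

If the grep comes back empty, rebuild the initramfs before creating the image.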

The nice part of the Oracle tutorial is that you can minimize the size you need to upload by using a sparse copy etc. But on Windows or in Cygwin that doesn't work, nor on my iMac. Therefore I had to jump through some hoops by using another VirtualBox Linux VM that could access the image file, make a sparse copy, create a tar file and copy it back to the host OS (Windows or OSX).
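
On the Linux VM the commands amount to something like this (a sketch based on the tutorial; the file name System.img is illustrative):

# make a sparse copy so unused blocks take no disk space,
# then tar it up (-S lets tar handle the sparse file efficiently)
mkdir upload
cp --sparse=always System.img upload/System.img
cd upload
tar -Sczf ../System.img.tar.gz System.img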

Then use the upload feature of Oracle Compute Cloud (Oracle Storage Cloud, to be exact) to upload the resulting archive.

Tip: If you get errors that your password isn’t correct (like I did) you might not have set a replication policy. (See the Note at step 7 in the documentation link).

Now you can associate the image file you just uploaded with an image. Use a name and description that you like:

2017-07-14 17_54_30-Oracle Compute Cloud Service - Images

Then press “Ok” to have the image created, and you will see messages similar to these on your screen:

2017-07-14 17_54_40

2017-07-14 17_54_45-Oracle Compute Cloud Service - Images

I now have two images created in IAAS. One exactly the same as my AWS image source and one with a small but important change:

2017-07-14 17_55_16-Oracle Compute Cloud Service - Images

Now create an instance with the recently uploaded image:

2017-07-14 17_55_37-Oracle Compute Cloud Service - Images

2017-07-14 17_56_34-Oracle Compute Cloud Service - Instance Creation

Choose the shape that you need:

2017-07-14 17_56_45-Oracle Compute Cloud Service - Instance Creation

Do not forget to associate your SSH Keys with the instance or you will not be able to logon to the instance:

2017-07-14 17_58_18-Oracle Compute Cloud Service - Instance Creation

I left the Network details default:
2017-07-14 18_01_33-Oracle Compute Cloud Service - Instance Creation

To change the storage details of the boot disk press the “hamburger menu” on the right (Just below “Boot Drive”):

2017-07-14 18_02_12-Oracle Compute Cloud Service - Instance Creation

I changed the boot disk from 11GB to 20GB so I can expand the filesystems if needed later on:

2017-07-14 18_03_21-Oracle Compute Cloud Service - Instance Creation

Review your input in the next step and press “Create” when you are satisfied:

2017-07-14 18_04_16-Oracle Compute Cloud Service - Instance Creation

You will see some messages passing by with the details of steps that have been put in motion:

2017-07-14 18_04_27-Oracle Compute Cloud Service - Instances (Instances)

If it all goes too fast you can press the little clock on the right side of your screen to get the ”Operations History”:

2017-07-14 18_04_35-Oracle Compute Cloud Service - Instances (Instances)

On the “Orchestrations” tab you can follow the status of the instance creation steps:

2017-07-14 18_06_45-Oracle Compute Cloud Service - Orchestrations

Once they have the status ready you will find a running instance on the instances tab:

2017-07-14 18_09_21-Oracle Compute Cloud Service - Instances (Instances)

Then you can connect to the instance and do with it whatever you want. In the GUI you can use the “hamburger” menu on the right to view the details of the instance, and for instance stop it:

2017-07-14 18_14_22-Oracle Compute Cloud Service - Instance Details (Overview)

Sometimes I got the error below, but found that when I waited a few minutes before repeating the action, it subsequently succeeded:

2017-07-17 18_01_32-

A nice feature of the Oracle Cloud is that you can capture screenshots of the console output, just as if you were looking at a monitor:

2017-07-17 18_46_08-Oracle Compute Cloud Service - Instance Details (Screen Captures)

And you can view the Console Log (albeit truncated to a certain size) if you added “net.ifnames=0 console=ttyS0” to GRUB_CMDLINE_LINUX in /etc/default/grub:

[ec2-user@d3c0d7 ~]$ cat /etc/default/grub 
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet net.ifnames=0 console=ttyS0"
GRUB_DISABLE_RECOVERY="true"
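
Remember that editing /etc/default/grub by itself is not enough: regenerate the active grub configuration afterwards. A sketch, assuming a BIOS-booted Oracle Linux 7 VM:

grub2-mkconfig -o /boot/grub2/grub.cfg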

If you didn’t, you will probably see something like:

2017-07-17 18_46_28-Oracle Compute Cloud Service - Instance Details (Logs)

If you did you will see something like:

2017-07-17 19_01_38-Oracle Compute Cloud Service - Instance Details (Logs)

I hope this helps you build your own Linux 7 Cloud images.

The post Oracle Compute Cloud – Uploading My Image – Part Two – Linux 7 appeared first on AMIS Oracle and Java Blog.

Pages

Subscribe to Oracle FAQ aggregator