Feed aggregator

AWS re:invent 2018 – Day 2

Yann Neuhaus - Wed, 2018-11-28 14:43

Second day in Las Vegas for the AWS re:Invent conference. It was time to travel a little around the campus to attend some sessions at the Aria. The planning was still centred on databases, with a bit of general AWS knowledge.

The shuttle service worked perfectly in both directions, with a reasonable travel time between the 2 hotels. But with such travel times, you can't always attend all the sessions you would like to see.

I started with a session about DevOps strategy where the speaker, Ajit Zadgaonkar, explained some rules for succeeding with a DevOps strategy. Even if you start small, moving to DevOps is a movement the whole company should be aware of. It's about teaching not only within the DevOps team but also letting other teams and the business know about your work.

Then I saw 2 interesting sessions about Aurora running on Amazon RDS. Aurora runs on the same platform as the other available engines (Oracle, SQL Server, PostgreSQL, MySQL and MariaDB), which means Aurora is fully managed by AWS.

The interesting part is that Aurora supports 2 different engines, MySQL and PostgreSQL, and in both cases AWS claims that performance is a lot better in Aurora than in the community edition because it has been designed for the cloud. One of the 2 sessions was a deep dive focusing on the PostgreSQL flavour, and the storage layer of Aurora is totally different.

AWS Aurora Postgres storage

AWS uses shared storage across a region (like Frankfurt) and "replicates" pages to 6 different locations. According to them, this provides great resilience/durability/availability. To prevent a write performance bottleneck, a write is considered valid once 4 out of the 6 copies have been written. In addition, Aurora is essentially redo-log based and doesn't send full pages/blocks to the storage, which greatly reduces the amount of written data. Below is a slide from a benchmark using pgbench.

Aurora Postgres benchmark

To continue my journey, I also went to basic sessions about AWS infrastructure itself, and it's interesting to note that they think ahead about how to power their datacenters: 50% of the energy used by AWS datacenters comes from renewable sources like wind or solar. I followed this session remotely thanks to overflow areas where you can attend a session currently ongoing in another hotel. You get a video stream of the session with the slides and a headset for the sound.

re:Invent overflow session

There are also 5 new regions planned in the near future, including 2 new locations in Europe: Milan, Bahrain, Stockholm, Hong Kong and Cape Town.

Even though there have already been some announcements, on Wednesday morning we will have the keynote with Andy Jassy, CEO of AWS. I'm looking forward to it.

The post AWS re:invent 2018 – Day 2 appeared first on Blog dbi services.

No more recovery.conf in PostgreSQL 12

Yann Neuhaus - Wed, 2018-11-28 13:41

Traditionally, everything related to recovery in PostgreSQL goes into recovery.conf. This is not only true for recovery settings but also for turning an instance into a replica that follows a master. Some days ago this commit landed in the PostgreSQL git repository. What that effectively means is that there will be no more recovery.conf starting with PostgreSQL 12. How does that work then? Let's do some tests.

Obviously you need the latest development version of PostgreSQL (if you are not sure on how to do that check here and here):

postgres@pgbox:/home/postgres/ [PGDEV] psql -X postgres
psql (12devel)
Type "help" for help.

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Let's look at the replica case first. When you do a pg_basebackup you can tell it to write a recovery.conf file (at least you could up to PostgreSQL 11). So what changed here?

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write configuration for replication

When you compare that to a version before 12 you’ll notice the difference in wording:

postgres@pgbox:/home/postgres/ [PGDEV] /u01/app/postgres/product/10/db_4/bin/pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write recovery.conf for replication

The word "recovery.conf" is gone and it is a more general statement about replication configuration now. So what does pg_basebackup do in PostgreSQL 12 when we ask it to write the configuration for replication?

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup -R -D /var/tmp/pg12s/

We do not have a recovery.conf file:

postgres@pgbox:/home/postgres/ [PGDEV] ls -la /var/tmp/pg12s/
total 64
drwxr-xr-x. 20 postgres postgres  4096 Nov 27 20:19 .
drwxrwxrwt.  6 root     root       256 Nov 27 20:19 ..
-rw-------.  1 postgres postgres   224 Nov 27 20:19 backup_label
drwx------.  5 postgres postgres    41 Nov 27 20:19 base
-rw-------.  1 postgres postgres    33 Nov 27 20:19 current_logfiles
drwx------.  2 postgres postgres  4096 Nov 27 20:19 global
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_commit_ts
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_dynshmem
-rw-------.  1 postgres postgres  4513 Nov 27 20:19 pg_hba.conf
-rw-------.  1 postgres postgres  1636 Nov 27 20:19 pg_ident.conf
drwxr-xr-x.  2 postgres postgres    32 Nov 27 20:19 pg_log
drwx------.  4 postgres postgres    68 Nov 27 20:19 pg_logical
drwx------.  4 postgres postgres    36 Nov 27 20:19 pg_multixact
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_notify
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_replslot
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_serial
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_snapshots
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat_tmp
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_subtrans
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_tblspc
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_twophase
-rw-------.  1 postgres postgres     3 Nov 27 20:19 PG_VERSION
drwx------.  3 postgres postgres    60 Nov 27 20:19 pg_wal
drwx------.  2 postgres postgres    18 Nov 27 20:19 pg_xact
-rw-------.  1 postgres postgres   390 Nov 27 20:19 postgresql.auto.conf
-rw-------.  1 postgres postgres 26000 Nov 27 20:19 postgresql.conf
-rw-------.  1 postgres postgres     0 Nov 27 20:19 standby.signal

Replica related configuration is appended to postgresql.auto.conf:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/postgresql.auto.conf 
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_line_prefix = '%m - %l - %p - %h - %u@%d '
log_directory = 'pg_log'
primary_conninfo = 'user=postgres passfile=''/home/postgres/.pgpass'' port=5433 sslmode=prefer sslcompression=0 target_session_attrs=any'

But what about the timeline and all the other recovery settings? These have all been merged into the normal postgresql.[auto.]conf file as well:

postgres=# select name,setting from pg_settings where name like '%recovery%';
           name            | setting 
---------------------------+---------
 recovery_end_command      | 
 recovery_min_apply_delay  | 0
 recovery_target           | 
 recovery_target_action    | pause
 recovery_target_inclusive | on
 recovery_target_lsn       | 
 recovery_target_name      | 
 recovery_target_time      | 
 recovery_target_timeline  | 
 recovery_target_xid       | 
 trace_recovery_messages   | log
(11 rows)
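
Because these recovery settings are now ordinary server parameters, they can also be managed with ALTER SYSTEM. As a small sketch (the value ends up in postgresql.auto.conf and, since recovery_target_action is a postmaster-level parameter, it only takes effect at the next restart):

postgres=# alter system set recovery_target_action = 'promote';
ALTER SYSTEM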

So all the settings can now be set in one file. The remaining question is: how does the instance then know when it needs to go into recovery? Before PostgreSQL 12 the presence of the recovery.conf file told the instance to go into recovery. Now that the file is gone there must be a new mechanism, and that is the "standby.signal" file in the case of a replica:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/standby.signal 
postgres@pgbox:/home/postgres/ [PGDEV] 

That file is empty and just tells PostgreSQL to go into recovery and then process the recovery-related parameters, which are now in postgresql.[auto.]conf. The same is true when a (point-in-time) recovery is requested: the signal file in that case is "recovery.signal".
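
As a sketch of what a point-in-time recovery request looks like now (the paths and the target time below are purely hypothetical; assume a restored base backup in /var/tmp/pg12r and WAL archives in /var/tmp/archive): the recovery parameters go into postgresql.conf and an empty recovery.signal file asks for the recovery:

postgres@pgbox:/home/postgres/ [PGDEV] echo "restore_command = 'cp /var/tmp/archive/%f %p'" >> /var/tmp/pg12r/postgresql.conf
postgres@pgbox:/home/postgres/ [PGDEV] echo "recovery_target_time = '2018-11-27 20:30:00'" >> /var/tmp/pg12r/postgresql.conf
postgres@pgbox:/home/postgres/ [PGDEV] touch /var/tmp/pg12r/recovery.signal
postgres@pgbox:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pg12r/ start

Once the target is reached the instance behaves according to recovery_target_action (pause by default, as shown in the pg_settings output above).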

All in all, that means there is one less configuration file to take care of, and that is good. The question is how fast all the third-party tools will catch up with this change.

The post No more recovery.conf in PostgreSQL 12 appeared first on Blog dbi services.

LinkedIn, and the GDPR age

Rob Baillie - Wed, 2018-11-28 13:34
I should start this post by saying I’m neither a lawyer, nor a GDPR expert.  Possibly both of those facts will become massively apparent in the text that follows.

Also, I’m not a LinkedIn Premium user - so it’s possible I’m missing something obvious by not having access to it.

But anyway, I've been thinking about how LinkedIn fits into a GDPR world, and something doesn't seem quite right to me at the moment.

LinkedIn are in the data business, and they’re very good at protecting that asset.  They tend to be (quite rightly) pro-active in stopping people from extracting data from their systems and pushing it into their own systems.

As such, businesses (recruiters particularly) are encouraged to contact directly within LinkedIn, and they are offered tools to discover people and commence that communication.

Unfortunately, this lack of syncing between LinkedIn and in-house systems can cause a big problem with GDPR.

That is:
What happens if someone says to a recruitment organisation: "Please forget me, and do not contact me again"?

In this situation, the organisation is obliged to 'remove' them from their systems.

At some point in the future, another recruiter from the same organisation finds the person on LinkedIn without reference to their own systems, and messages them using LinkedIn.

What happens next?

By the letter of the law, the organisation may not have done anything wrong.
  • The person is no longer in the organisation’s system, they were found on LinkedIn.
  • The person was not sent an e-mail, or phoned, they were messaged within LinkedIn.
  • The person has consented to have their data held by LinkedIn for the expressed purpose of being contacted by potential recruiters via the platform.

With all this in mind, it may be interpreted that it’s fair game to contact anyone on LinkedIn, regardless of their expressed desire not to be contacted by a particular company.

However, whilst this may be within the definition of the law, it’s pretty clear it’s not in the spirit of the law.

Note - Again I’m not a GDPR expert, nor a lawyer, so can't say for certain that it IS within the definition of the law - nor am I asserting that it is - just that I can imagine that it might be interpreted that way by some people.

And this is where things get complicated for LinkedIn.  I can see a few outcomes of this, but two of them could be extremely worrying for the future of LinkedIn.

Scenario - LinkedIn Premium is seen as an extension of a subscribing organisation’s IT systems.

It could be argued that, whilst LinkedIn is an independent entity, when they provide services to another organisation, their systems then become part of the remit of that subscribing organisation.

I.E. within LinkedIn, any action by a user and the storage of data of that action falls solely within the responsibility of the employer of the user that performs that action.  LinkedIn are not responsible for the use of the data in any way.

On first glance, this looks ideal to LinkedIn - no responsibility!

However, that’s not true - if there’s ever a test case that proves this point, then suddenly LinkedIn becomes a big risk to any organisation that uses it.

Over the course of the last 2 years or so, every data-holding organisation in the EU has looked carefully at their data retention and use policies and systems and done what they can to protect themselves - in many cases I'm sure they have changed suppliers and systems since the existing systems have not proven up to scratch in the light of GDPR legislation.

Up to now, I’m not sure that many people have scrutinised LinkedIn in the same way.

At the moment it might be argued that LinkedIn is not supplying the tools to subscribers to allow them to comply with the GDPR legislation.  For example, I’m not aware of any functionality that allows an organisation to state "I wish to completely forget this person, and ensure that I cannot connect, view data on or contact them without their expressed consent”.  If that’s a minimum requirement of any internal system, why would it not be a minimum requirement for LinkedIn?

It could be that once that test case comes, a lot of organisations will take a look at LinkedIn and decide it doesn’t stand up, and it’s no longer worth the risk.

Scenario - LinkedIn, as the data controller, is responsible for the contact made by any users within the system.

This is potentially even worse for LinkedIn.  Since LinkedIn hold the data about people, provide the tools for discovering those people, provide the tools for contacting people, and for relaying those messages, it may be argued that it is up to LinkedIn to provide the mechanism to allow Users to state that they do not wish to be visible to or contacted by a given organisation.

That is, whilst it is another user who is sending the message, it may be that a future test case could state that LinkedIn are responsible for keeping track of who has ‘forgotten’ who.

By not providing that mechanism, and allowing users on the system to make contact when the contact is not welcome and against the target’s wishes, it’s possible that LinkedIn could be argued as being responsible for the unwelcome contact and therefore misuse of data.

Summary

Today, it seems that LinkedIn is in a bit of limbo.

There may be a recognised way to use LinkedIn in the GDPR era - find someone, check in my system that I’m allowed to contact them, go back to LinkedIn and contact them - but in order for that to work it requires the due diligence of recruiters to ensure that the law isn’t broken.

Realistically, something will have to change, or that test case is coming; at some point, someone is going to get an email that is going to break the limbo.

When that happens, I wonder which way it will go..?

[Solved] Oracle E-Business Suite (R12) Cloning Issue: ERROR: Script timed out

Online Apps DBA - Wed, 2018-11-28 06:01

Bothered with the Script Timed Out error during cloning and want to know how to troubleshoot it? If yes, then visit: https://k21academy.com/appsdba38 and consider our latest blog that covers: ✔High-level steps of cloning in EBS (R12) ✔Issues encountered by trainees while configuring the target system on the application tier & much more… Bothered with Script […]

The post [Solved] Oracle E-Business Suite (R12) Cloning Issue: ERROR: Script timed out appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Vlog] Role Of EBS Cloud Admin Tool in Provisioning Oracle EBS (R12) on Cloud

Online Apps DBA - Wed, 2018-11-28 03:54

Do you know that the EBS Cloud Admin Tool is one of the most important utilities to deploy, manage and migrate Oracle E-Business Suite (R12) on Cloud? Visit: http://k21academy.com/ebscloud11 which discusses: 1) What is the EBS Cloud Admin Tool? 2) What can the EBS Cloud Admin Tool do? 3) High-level steps to deploy the EBS Cloud Admin Tool on Oracle […]

The post [Vlog] Role Of EBS Cloud Admin Tool in Provisioning Oracle EBS (R12) on Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partitioning -- 10 : Virtual Column Based Partitioning

Hemant K Chitale - Wed, 2018-11-28 03:44
Oracle 11g supports specifying a Virtual Column as the Partition Key.

A Virtual Column is a column whose value is derived from an expression involving other columns or SQL/PL/SQL functions.  The actual value is not stored in the block holding the row but is computed when the row is retrieved.

For example :

create table my_sales_table
(invoice_id number primary key,
invoice_date date,
sale_year number(4) generated always as ( to_number(to_char(invoice_date,'YYYY')) ) virtual,
customer_id number,
sale_value number
)
/

insert into my_sales_table
(invoice_id, invoice_date, customer_id, sale_value)
values
(1,sysdate,100,10200)
/

select invoice_id, invoice_date, sale_year
from my_sales_table
/

INVOICE_ID INVOICE_DATE SALE_YEAR
1 28-NOV-18 2018


The MY_SALES_TABLE can be List, Range or Hash Partitioned on SALE_YEAR.

drop table my_sales_table;

create table my_sales_table
(invoice_id number primary key,
invoice_date date,
sale_year number(4) generated always as ( to_number(to_char(invoice_date,'YYYY')) ) virtual,
customer_id number,
sale_value number
)
partition by list(sale_year)
(partition p_2018 values (2018),
partition p_2019 values (2019),
partition p_2020 values (2020)
)
/

insert into my_sales_table
(invoice_id, invoice_date, customer_id, sale_value)
values
(1,sysdate,100,10200)
/

select invoice_date, sale_year from my_sales_table partition (p_2018)
/

INVOICE_DATE SALE_YEAR
28-NOV-18 2018


Thus, the SALE_YEAR value is not actually stored on disk, yet each partition has rows based on the SALE_YEAR (derived) value.
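
As mentioned above, the same virtual column could just as well be used as a Range or Hash partition key. For illustration only, a hash-partitioned variant would look like this:

create table my_sales_table_hash
(invoice_id number primary key,
invoice_date date,
sale_year number(4) generated always as ( to_number(to_char(invoice_date,'YYYY')) ) virtual,
customer_id number,
sale_value number
)
partition by hash (sale_year) partitions 4
/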


Categories: DBA Blogs

How to limit the resources at pluggable level in database 12cR2

Tom Kyte - Wed, 2018-11-28 02:06
Some bad performance pdbs may result in disaster for the whole db machine, in order to avoid this situation we need to limit the pdb resource usage! There's tens of pdbs in our 12cR2, each one requires DIFFERENT resources, we have to limit them with...
Categories: DBA Blogs

oracle 11g place limits on memory to session/user

Tom Kyte - Wed, 2018-11-28 02:06
Hello ! Here is my question : Is it possible to allow a minmum amount of memory to a specific session or a user on Oracle 11g? If it is, could anyone, please, explain it to me ? :) My fundamental need is as follow : I have two applications th...
Categories: DBA Blogs

expressions on pivot columns

Tom Kyte - Wed, 2018-11-28 02:06
On LiveSQL there is a tutorial, 'Converting Rows to Columns and Back Again: Databases for Developers' https://livesql.oracle.com/apex/livesql/file/tutorial_GNZ3LQPJ0K6RTD1NEEPNRQT0R.html It guides me through on how to pivot rows into columns an...
Categories: DBA Blogs

ORA-12154: TNS:could not resolve the connect identifier specified for ODBC

Tom Kyte - Wed, 2018-11-28 02:06
Greetings, I received the following error while trying to connect to Oracle instantclient 10.1 in MS Access 2016: ORA-12154: TNS:could not resolve the connect identifier specified [#12154) [Microsoft][ODBC Driver Manager] SQLSetConnectAttr fail...
Categories: DBA Blogs

ORA-12560 ERROR - do I have all the services I need ?

Tom Kyte - Wed, 2018-11-28 02:06
Hello, I am using a Macbook and I downloaded the Parallels virtual machine to get Windows 10 in order to download SQL Plus, I have done all of the minimum requirements and installed SQL Plus, but when it comes to entering my username and password...
Categories: DBA Blogs

Database Link across versions

Tom Kyte - Wed, 2018-11-28 02:06
Can a database link created on an Oracle 12 database access tables hosted on an Oracle 11 database? We have an existing application Oracle Argus that we are upgrading to Argus 8.1.2.1 which will run on Oracle 12 Exadata server. We have an existing...
Categories: DBA Blogs

Insert datetime values from .NET apps

Tom Kyte - Wed, 2018-11-28 02:06
Hello, Team. A .NET app is being developed by our team. We are using Oracle database 18.3.0. In order to insert date and time in a date column developers use the following code: TO_DATE(TO_CHAR(:DATE_COLUMN,'DD/MM/RRRR' HH:MI:SS AM), 'DD/...
Categories: DBA Blogs

Unix Strings command and data not encrypted

Tom Kyte - Wed, 2018-11-28 02:06
Hello TOM, I have a problem with the Unix command named STRINGS. If I do not use encryption on my datas, I can read my string datas via Unix, even if these datas were truncated under Oracle. And dtranger, I have their history too... Fir...
Categories: DBA Blogs

Table storage parameters

Tom Kyte - Wed, 2018-11-28 02:06
I've always created tables using the defaults for storage parameters: create table blah ( a number, b varchar2(100) ); however I recently worked with an organization where the following storage parameters were always spe...
Categories: DBA Blogs

Default Value From A Sequence: Sequence Updates On Provided Value Entries With INSERT Through LOOP But Not on Individual INSERTs

Tom Kyte - Wed, 2018-11-28 02:06
I have two samples of code below that use the same table and sequence definitions. The first one inserts using individual insert statements, with the first three inserts giving an override value to the sequenced column and the final insert relyi...
Categories: DBA Blogs

Oracle EBS (R12) On Cloud (OCI): High Level Steps

Online Apps DBA - Wed, 2018-11-28 01:52

OCI is Oracle's Generation 2 Cloud and is the recommended Cloud to build & manage EBS (R12). This is one thing every Apps DBA & EBS Architect must know if running or planning to run Oracle EBS (R12) on Cloud. Check the high-level steps to deploy Oracle EBS (R12) on Oracle Cloud Infrastructure (OCI) using […]

The post Oracle EBS (R12) On Cloud (OCI): High Level Steps appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

No Fear of Container Technology (DOAG 2018)

Yann Neuhaus - Wed, 2018-11-28 01:46

I have been working in the IT industry for 30 years and have dealt with Oracle RDBMS systems again and again. For almost 4 years now, as a consultant at dbi services, I have been in close contact with Oracle databases, which is also why I visited DOAG 2018.

I travelled to DOAG in Nuremberg with great interest and had planned to attend several sessions on the topics of OpenShift and containers.

Why OpenShift? For some time now we have been seeing projects (PoCs) in this area at our customers. Red Hat offers a complete solution that includes all the components, so getting started is considerably faster.

Interestingly, it is not only euphoria you can sense; there are also critical voices on this topic. But it reminds me of the time when hardware virtualization came up. Critical questions were asked back then too: Will this work? It is far too complex! This technology is only suitable for service providers, cloud providers, etc.

Reason enough, then, to listen to first-hand experience reports at DOAG and to attend talks on this topic.

So what is actually changing here?

After hardware virtualization, the next virtualization step now follows: Docker (the use of containers).
The difference between hardware virtualization and Docker is best shown with a schematic diagram.

Schematic representation of hardware virtualization

Server virtualization

The architectural difference between hardware virtualization and containers

docker_fig1

The biggest difference: with hardware virtualization, every virtual server has its own complete operating system. With the container architecture this part largely disappears, which makes each container significantly smaller and, above all, more portable. Fewer resources are needed on the infrastructure, or considerably more containers can be run on the same infrastructure.
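
A small illustration of this point (assuming Docker is installed and the public alpine image can be pulled): a container does not boot an operating system of its own, it simply reuses the host kernel.

# run uname -r inside a throwaway alpine container, then on the host
docker run --rm alpine uname -r
uname -r

Both commands print the same kernel release; the alpine image only contributes a few megabytes of userland on top of the shared host kernel.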
 

OpenShift, the Red Hat solution for Docker, has the following architecture

architecture_overview
 

What can we expect from OpenShift, and what do we definitely have to keep in mind?

– The next step in virtualization -> containers
– A complex infrastructure; with Red Hat everything comes from a single source (incl. Kubernetes)
– The start into the container world has to be very well prepared
– The technology is still very young; quite a few things will still change
– If possible, run a PoC, and do not wait too long
– Concepts and processes are mandatory
 

My conclusion

My first visit to DOAG gave me very valuable information and insights on the two topics OpenShift and containers, in particular the solution from Red Hat, a technology I will be looking into over the coming months. I am sure that we are once again at a very interesting technological turning point: the start into container infrastructures with complete solutions such as OpenShift from Red Hat. Despite all the euphoria, however, the move into this technology should be planned and controlled. In particular, experience should be gathered in a PoC. The step of running an OpenShift infrastructure in a production environment must, based on that experience, be well planned and controlled so as not to run into the same problems we saw back then with hardware virtualization!

chaos_container

Production operation requires security, stability and continuity, and all components should also be kept up to date. Monitoring and backup/restore are likewise topics that have to be addressed before going live. This technology certainly enables more speed, but it needs rules and processes so that, after a while, the container world does not suddenly end up looking like the picture above!

The post No Fear of Container Technology (DOAG 2018) appeared first on Blog dbi services.

Counting Rows

Jonathan Lewis - Tue, 2018-11-27 15:08

Here's another little utility I use from time to time (usually for small tables) to check how many rows there are in each block of the table, and which blocks are used. It doesn't do anything clever, it just calls routines in the dbms_rowid package for each rowid in the table:


rem
rem     Rowid_count.sql
rem     Generic code to count rows per block in a table
rem     Ordered by file and block
rem

define m_table = '&1'

spool rowid_count

select 
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no, 
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_starting_in_block
from 
        &m_table        t1
group by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid) 
order by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid)
;


select
        rows_starting_in_block,
        count(*)        blocks
from
        (
        select 
                dbms_rowid.rowid_relative_fno(rowid), 
                dbms_rowid.rowid_block_number(rowid),
                count(*)                                rows_starting_in_block
        from 
                &m_table        t1
        group by 
                dbms_rowid.rowid_relative_fno(rowid), 
                dbms_rowid.rowid_block_number(rowid) 
        )
group by
        rows_starting_in_block
order by
        rows_starting_in_block
;

spool off


And here’s a sample of the output:


REL_FILE_NO   BLOCK_NO ROWS_STARTING_IN_BLOCK
----------- ---------- ----------------------
         22        131                    199
         22        132                    199
         22        133                    199
         22        134                    199
         22        135                     88
         22        138                    111

6 rows selected.


ROWS_STARTING_IN_BLOCK     BLOCKS
---------------------- ----------
                    88          1
                   111          1
                   199          4

3 rows selected.


Obviously it could take quite a lot of I/O and CPU to run the two queries against a large table – generally I use it when I want to pick a block to dump afterwards.
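
Having picked a block from the first result set (file 22, block 131 in the sample output above), the dump itself can then be requested like this; the output is written to the session's trace file:

alter system dump datafile 22 block 131;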
