Feed aggregator

OUAF 4.3.0.4.0 On its way

Anthony Shorten - Sun, 2017-03-05 15:32

We are currently putting the final touches on the next service pack (SP4) for the latest Oracle Utilities Application Framework release (4.3). This is a very exciting release for us: a lot of the functionality we use for the cloud implementations of our products is being made available to customers on both cloud and non-cloud implementations.

Over the next few weeks I will be releasing a series of articles highlighting some of the major changes we have introduced in the service pack that will be of interest to people in the field for their non-cloud implementations.

The release adds new functionality, updates existing functionality and retires functionality that we have previously announced as deprecated. You will start seeing products released based upon this new service pack in the upcoming months.

It is a very exciting time for Oracle Utilities and this release will be a foundation for even more exciting functionality we have planned going forward.

EMEA Edge Conference

Anthony Shorten - Sun, 2017-03-05 15:24

I will be attending the EMEA Edge Conference in Reading, UK, which will be held on April 25-26th, 2017. I am planning to hold the same technical sessions as I did at the AMER conference earlier this year. As with that conference, the sessions are a combination of what we have achieved, what we are planning, and some tips and techniques to take back to your implementations of the products.

I would like to thank the participants of my AMER and JAPAC sessions who provided me with valuable insight into the market which we can factor into our ongoing roadmaps.

The sessions we are planning are outlined in my previous blog entry on the Edge technical stream.

undo tablespace

Tom Kyte - Sun, 2017-03-05 12:26
Hi Tom, I have few questions related to undo tablespace. 1)how to startup the database if undo datafile lost and no backup(without undo how uncommitted transactions will be rolled back) 2)In rac if one undo datafile get corrupted,only that i...
Categories: DBA Blogs

Cursor with FOR UPDATE NOWAIT clause UTL_FILE.FOREMOVE ORA-29285: file write error-

Tom Kyte - Sun, 2017-03-05 12:26
Dear Experts, I am having problem with a oracle proc inside package which writes named xxx.txt file on the linux server. cursor with FOR UPDATE NOWAIT clause fetch data from the tables and write data into the file using the UTL_FILE oracle functio...
Categories: DBA Blogs

Bad, Bad data

Kubilay Çilkara - Sat, 2017-03-04 15:10



One should feel really sorry for anyone who has to rely on filtering and making decisions based on bad, bad data. It is going to be a bad decision.


This is serious stuff. The other day I read a recent study by IBM which shows that "Bad Data" costs the US $3.1 trillion per year!


OK, let's say you don't mind the money and have money to burn: what about the implications of using the bad data? As the article hints, these could be misinformation, wrong calculations, bad products and weak decisions; mind you, these will be weak/wrong 'data-driven' decisions. Opportunities can be lost here.


So why all this badness? Isn't it preventable? Can't we do something about it?


Here are some options


1) Data Cleansing: This is a reactive solution where you clean, disambiguate, correct and change the bad data, and put the right values in the database after you find the bad data. This is something you do when it is too late and you already have bad data. Better late than never. It is a rather expensive and very time-consuming solution, never mind the chance that you can still get it wrong. There are tools out there which you can buy and which can help you do data cleansing; these tools will 'de-dupe' and correct the bad data up to a point. Of course data cleansing tools alone are not enough: you will still need data engineers and data experts who know your data, or who can study your data, to guide you. Data cleansing is the damage-control option. It is the solution hinted at in the article as well.


2) Good Database Design: Use constraints! My favourite option. Constraints are key at database design time: put native database checks and constraints on the database entities and tables to guarantee the entity integrity and referential integrity of your database schema, and validate more! Do not just create plain, simple database tables; always think of ways to enforce the integrity of your data, and do not rely only on code. Prevent NULLs and empty strings as much as you can at database design time, and put unique indexes and logical check constraints inside the database. Use the database tools and features you are already paying for in your database license and that are already available to you; do not re-invent the wheel, validate your data. This way you prevent the 'creation' of bad data at the source! Take a proactive approach. In projects, don't just skip the database constraints and say you will do it in the app or later. You know you will not; chances are you will forget it in most cases. Also, apps can change, but databases tend to outlast the apps. Look at a primer on how to do Database Design.
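
To make option 2 concrete, here is a minimal sketch (the table and column names are invented for illustration) of the kind of declarative checks that stop bad data at the source:

-- Hypothetical example: enforce integrity declaratively at design time
CREATE TABLE customers (
  customer_id   NUMBER         CONSTRAINT pk_customers PRIMARY KEY,
  email         VARCHAR2(320)  NOT NULL
                CONSTRAINT uq_customers_email UNIQUE
                CONSTRAINT ck_customers_email CHECK (email LIKE '%@%'),
  country_code  CHAR(2)        NOT NULL
                CONSTRAINT ck_customers_country CHECK (country_code = UPPER(country_code)),
  created_at    DATE           DEFAULT SYSDATE NOT NULL
);

CREATE TABLE orders (
  order_id     NUMBER        CONSTRAINT pk_orders PRIMARY KEY,
  customer_id  NUMBER        NOT NULL
               CONSTRAINT fk_orders_customer REFERENCES customers (customer_id),
  amount       NUMBER(12,2)  NOT NULL
               CONSTRAINT ck_orders_amount CHECK (amount >= 0)
);

Every insert or update that violates one of these rules is rejected by the database itself, regardless of which application wrote it.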


My modus operandi is option 2: a good database design and data engineering can save you money, a lot of money. Don't rush into projects by neglecting or skipping database tasks; engage the data experts and software engineers with the business, find out the requirements, talk about them, ask many questions and do data models. Reverse engineer everything in your database and have a look. Know your data! That's the only way to have good, integral and reliable true data, and it will help you and your customers win.

Categories: DBA Blogs

Purging Unified Audit Trail in 12cR1

Yann Neuhaus - Sat, 2017-03-04 11:24

When you want to empty a table you have two methods: delete and truncate. If, for any reason (see the previous post), the Unified Audit Trail has become too big, you cannot directly delete or truncate the table: you must call dbms_audit_mgmt.clean_audit_trail. But then you want to know whether it will do slow deletes or quick truncates. Let's trace it.

I have filled my Unified Audit Trail with hundreds of thousands of failed logins:
SQL> select unified_audit_policies,action_name,count(*) from unified_audit_trail group by unified_audit_policies,action_name;
 
UNIFIED_AUDIT_POLICIES                   ACTION_NAME            COUNT(*)
---------------------------------------- -------------------- ----------
                                         EXECUTE                       2
ORA_LOGON_FAILURES                       LOGON                    255799

We have two methods to purge: purge records older than a timestamp or purge all.

Purge old

Auditing is different from logging. It's a security feature. The goal is not to keep only recent information by specifying a retention; the goal is to read, process and archive the records, and then set a timestamp to the high-water mark that has been processed. A background job then deletes what is before this timestamp.
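
In practice this background purge is usually scheduled through DBMS_AUDIT_MGMT as well; the following is only a minimal sketch (the job name and the 24-hour interval are arbitrary choices), shown here because the post focuses on the manual call:

BEGIN
  dbms_audit_mgmt.create_purge_job(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_unified,
    audit_trail_purge_interval => 24,  -- hours between automatic purge runs
    audit_trail_purge_name     => 'UNIFIED_AUDIT_PURGE_JOB',
    use_last_arch_timestamp    => TRUE -- only delete records already archived
  );
END;
/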

I set the timestamp to 6 hours before now

SQL> exec dbms_audit_mgmt.set_last_archive_timestamp(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified
,last_archive_time=>sysdate-6/24);
PL/SQL procedure successfully completed.

And call the clean procedure:

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified
,use_last_arch_timestamp=>TRUE);
PL/SQL procedure successfully completed.

This was fast, but let's look at the tkprof. Besides some selects, I see a delete on the CLI_SWP$ table that stores the Unified Audit Trail in SecureFile LOBs:

delete from "CLI_SWP$2f516430$1$1" partition("HIGH_PART")
where
max_time < :1
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.47       1.82         20        650      47548        6279
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.47       1.82         20        650      47548        6279
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 7 (recursive depth: 1)
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE CLI_SWP$2f516430$1$1 (cr=650 pr=20 pw=0 time=1827790 us)
6279 6279 6279 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=248 pr=0 pw=0 time=15469 us cost=5 size=18020 card=530)
6279 6279 6279 TABLE ACCESS FULL CLI_SWP$2f516430$1$1 PARTITION: 1 1 (cr=248 pr=0 pw=0 time=10068 us cost=5 size=18020 card=530)

I will not go into detail here. This delete may be optimized (about 120,000 audit trail records were actually deleted behind those 6,000 rows). The table is partitioned, and we could expect old partitions to be truncated, but there are many bugs with that: on a lot of environments we see all rows in HIGH_PART.
This is improved in 12cR2 and will be the subject of a future post. If you have a huge audit trail to purge, a conventional delete is not optimal.

Purge all

I still have a lot of rows remaining.

SQL> select unified_audit_policies,action_name,count(*) from unified_audit_trail group by unified_audit_policies,action_name;
 
UNIFIED_AUDIT_POLICIES                   ACTION_NAME            COUNT(*)
---------------------------------------- -------------------- ----------
                                         EXECUTE                       4
ORA_LOGON_FAILURES                       LOGON                    136149

When purging all without setting a timestamp, I expect a truncate which is faster than deletes. Let’s try it and trace it.

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified
,use_last_arch_timestamp=>FALSE);
PL/SQL procedure successfully completed.

First, there seems to be an internal lock acquired:
SELECT LOCKID FROM DBMS_LOCK_ALLOCATED WHERE NAME = :B1 FOR UPDATE
UPDATE DBMS_LOCK_ALLOCATED SET EXPIRATION = SYSDATE + (:B1 /86400) WHERE ROWID = :B2

Then a partition split:
alter table "CLI_SWP$2f516430$1$1" split partition high_part at (3013381) into (partition "PART_6", partition high_part lob(log_piece) store as securefile (cache logging tablespace SYSAUX) tablespace "SYSAUX")

The split point is the current timestamp SCN:

SQL> select scn_to_timestamp(3013381) from dual;
 
SCN_TO_TIMESTAMP(3013381)
---------------------------------------------------------------------------
02-MAR-17 05.59.06.000000000 PM

This is the time when I ran the purge, and it is probably used to 'truncate' all previous partitions but keep the ongoing one.

Then, there is no TRUNCATE in the trace, but something similar: some segments are dropped:

delete from seg$
where
ts#=:1 and file#=:2 and block#=:3
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      6      0.00       0.00          0         18         12           6
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.00          0         18         12           6

There is finally a delete, but with no rows to delete as the rows were in the dropped segments:

delete from "CLI_SWP$2f516430$1$1" partition("HIGH_PART")
where
max_time < :1
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          3          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          3          0           0
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 7 (recursive depth: 1)
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE CLI_SWP$2f516430$1$1 (cr=3 pr=0 pw=0 time=61 us)
0 0 0 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=3 pr=0 pw=0 time=57 us cost=5 size=2310 card=33)
0 0 0 TABLE ACCESS FULL CLI_SWP$2f516430$1$1 PARTITION: 1 1 (cr=3 pr=0 pw=0 time=48 us cost=5 size=2310 card=33)

So what?

Cleaning the Unified Audit Trail is done with internal statements, but it looks like a delete when use_last_arch_timestamp=>TRUE and like a truncate when use_last_arch_timestamp=>FALSE. This means that we can use this procedure when AUDSYS has grown too much. However, there are a few bugs with this internal table, which is partitioned even when partitioning is not allowed. The implementation has changed in 12.2, so the next blog post will show the same test on 12cR2.

 

Cet article Purging Unified Audit Trail in 12cR1 est apparu en premier sur Blog dbi services.

12cR2 new features for Developers and DBAs - Here is my pick (Part 2)

Syed Jaffar - Sat, 2017-03-04 10:29
In Part 1, I outlined a few (my pick) of the 12cR2 new features useful for Developers and DBAs. In Part 2, I am going to discuss a few more new features.
Read/Write and Read-Only Instances
Read-write and read-only database instances of the same primary database can coexist in an Oracle Flex Cluster.
Advanced Index Compression
Prior to this release, the only form of advanced index compression was low compression. Now you can also specify high compression. High compression provides even more space savings than low compression.
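
As a hedged illustration (the index and table names below are invented), high advanced index compression can be requested at index creation or rebuild time:

-- Sketch: 12.2 advanced index compression at the HIGH level (names are hypothetical)
CREATE INDEX sales_cust_idx ON sales (customer_id, sale_date)
  COMPRESS ADVANCED HIGH;

-- Or convert an existing index during a rebuild
ALTER INDEX sales_cust_idx REBUILD COMPRESS ADVANCED HIGH;
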
PDBs Enhancements
  • I/O Rate Limits for PDBs
  • Different character sets of PDBs in a CDB
  • PDB refresh to periodically propagate changes from a source PDB to its cloned copy
  • CONTAINERS hint: When a CONTAINERS() query is submitted, recursive SQL statements are generated and executed in each PDB. Hints can be passed to these recursive SQL statements by using the CONTAINERS statement-level hint (see the sketch after this list).
  • Cloning a PDB no longer requires read-only mode: cloning of a pluggable database (PDB) no longer requires setting the source system to read-only mode before creating a full or snapshot clone of the PDB.
  • Near Zero Downtime PDB Relocation: This new feature significantly reduces downtime by leveraging the clone functionality to relocate a pluggable database (PDB) from one multitenant container database (CDB) to another CDB. The source PDB is still open and fully functional while the actual cloning operation is taking place.
  • Proxy PDB: A proxy pluggable database (PDB) provides fully functional access to another PDB in a remote multitenant container database (CDB). This feature enables you to build location-transparent applications that can aggregate data from multiple sources that are in the same data center or distributed across data centers.
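
A hedged sketch of the CONTAINERS hint mentioned above (the schema and table name are invented, and the hint string is only an example; check the 12.2 documentation for the exact behaviour):

-- Run from the CDB root: aggregate one table across the open PDBs,
-- passing a hint down to the recursive per-PDB statements.
SELECT /*+ CONTAINERS(DEFAULT_PDB_HINT='NO_PARALLEL') */
       con_id, COUNT(*)
FROM   CONTAINERS(app_user.orders)
GROUP  BY con_id;
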
Oracle Data Pump Parallel Export of Metadata: The PARALLEL parameter for Oracle Data Pump, which previously applied only to data, is extended to include metadata export operations. The performance of Oracle Data Pump export jobs is improved by enabling the use of multiple processes working in parallel to export metadata.
Renaming Data Files During Import
Oracle RAC :
  • Server Weight-Based Node Eviction: Server weight-based node eviction acts as a tie-breaker mechanism in situations where Oracle Clusterware needs to evict a particular node or a group of nodes from a cluster in which all nodes represent an equal choice for eviction. In such cases, the server weight-based node eviction mechanism helps to identify the node or group of nodes to be evicted based on additional information about the load on those servers. Two principal mechanisms, a system-inherent automatic mechanism and a user-input-based mechanism, exist to provide this guidance.
  • Load-Aware Resource Placement: Load-aware resource placement prevents overloading a server with more applications than the server is capable of running. The metrics used to determine whether an application can be started on a given server, either as part of the startup or as a result of a failover, are based on the anticipated resource consumption of the application as well as the capacity of the server in terms of CPU and memory.
Enhanced Rapid Home Provisioning and Patch Management
TDE Tablespace Live Conversion: You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion. A TDE tablespace can be easily deployed, performing the initial encryption that migrates to an encrypted tablespace with zero downtime. This feature also enables automated deep rotation of data encryption keys used by TDE tablespace encryption in the background with zero downtime.
Fully Encrypted Database: Transparent Data Encryption (TDE) tablespace encryption is applied to database internals including SYSTEM, SYSAUX, and UNDO.
TDE Tablespace Offline Conversion: This release introduces new SQL commands to encrypt tablespace files in place with no storage overhead. You can do this on multiple instances across multiple cores. Using this feature requires downtime, because you must take the tablespace temporarily offline. With Data Guard configurations, you can either encrypt the physical standby first and switchover, or encrypt the primary database, one tablespace at a time.
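
As a hedged sketch of both TDE conversion paths (the tablespace name, file names and algorithm are placeholders; verify the exact 12.2 syntax for your patch level before relying on it):

-- Online (live) conversion: zero downtime, but needs space for the converted copies
ALTER TABLESPACE app_data ENCRYPTION ONLINE USING 'AES256' ENCRYPT
  FILE_NAME_CONVERT = ('app_data.dbf', 'app_data_enc.dbf');

-- Offline conversion: in place with no storage overhead, but the tablespace must be offline
ALTER TABLESPACE app_data OFFLINE NORMAL;
ALTER TABLESPACE app_data ENCRYPTION OFFLINE ENCRYPT;
ALTER TABLESPACE app_data ONLINE;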

Is your DBA_FEATURE_USAGE_STATISTICS up-to-date?

Yann Neuhaus - Sat, 2017-03-04 05:35

The other day we were doing a licensing review for a client. As many DBAs may already know, this requires executing some Oracle scripts at OS level and database level.
Among these scripts is options_packs_usage_statistics.sql (Doc ID 1317265.1), an official Oracle script to check the usage of separately licensed Oracle Database Options/Management Packs.
This script uses the DBA_FEATURE_USAGE_STATISTICS view to retrieve its information, and sometimes the data in this view is not recent.
One important thing is that DBA_FEATURE_USAGE_STATISTICS is based on the most recent sample, recorded in the column LAST_SAMPLE_DATE. In our case we got the following results (outputs are truncated):

SYSDATE |
-------------------|
2017.02.17_13.36.44|


PRODUCT |LAST_SAMPLE_DATE |
-------------------------------|-------------------|
Active Data Guard |2014.01.02_13.37.53|
Advanced Analytics |2014.01.02_13.37.53|
Advanced Compression |2014.01.02_13.37.53|
Advanced Security |2014.01.02_13.37.53|
Database Vault |2014.01.02_13.37.53|
Diagnostics Pack |2014.01.02_13.37.53|
Label Security |2014.01.02_13.37.53|
OLAP |2014.01.02_13.37.53|
Partitioning |2014.01.02_13.37.53|
Real Application Clusters |2014.01.02_13.37.53|
Real Application Testing |2014.01.02_13.37.53|
Tuning Pack |2014.01.02_13.37.53|
.Exadata |2014.01.02_13.37.53|

If we compare SYSDATE with LAST_SAMPLE_DATE, we can see that we have to manually refresh our DBA_FEATURE_USAGE_STATISTICS data.
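
For reference, a minimal staleness check (a sketch; the formatted output above comes from the My Oracle Support script rather than from this exact query) is simply:

SQL> select sysdate, max(last_sample_date) from dba_feature_usage_statistics;
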
One way to refresh the statistics is to run the procedure:

exec dbms_feature_usage_internal.exec_db_usage_sampling(SYSDATE);

In our case the procedure did not refresh our data, even though there was no error and we received a message saying that the procedure completed successfully.

SQL> exec dbms_feature_usage_internal.exec_db_usage_sampling(SYSDATE);
PL/SQL procedure successfully completed.


SQL> select max(last_sample_date) from dba_feature_usage_statistics order by 1;
MAX(LAST_
---------
02-JAN-14

Following Oracle document 1629485.1, we were able to refresh the LAST_SAMPLE_DATE using this ALTER SESSION:
SQL> alter session set "_SWRF_TEST_ACTION"=53;
Session altered.


SQL> alter session set NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS';
Session altered.


SQL> select MAX(LAST_SAMPLE_DATE) from dba_feature_usage_statistics;
MAX(LAST_SAMPLE_DAT
-------------------
16/02/2017 13:44:46

Hope this article helps.

 

Cet article Is your DBA_FEATURE_USAGE_STATISTICS up-to-date? est apparu en premier sur Blog dbi services.

Sharding with Oracle 12c R2 Part I

Yann Neuhaus - Sat, 2017-03-04 05:33

Oracle 12.2 comes with many new features. In this article we are going to talk about sharding. It is a database scaling technique based on horizontal partitioning of data across multiple Oracle databases that together form a sharded database (SDB). Each shard contains the table with the same columns but a different subset of rows.
For the DBA, an SDB is in fact multiple databases that can be managed collectively or individually.
There are three methods of sharding: system-managed sharding, user-defined sharding and composite sharding.
In this article we are using system-managed sharding, where data is automatically distributed across shards using partitioning by consistent hash. It is the most commonly used method. We will just demonstrate how to create shards with Oracle; in the next articles we will show how to connect to these shards and how to add new shards.
What do we need?
Oracle Database 12c Release 2 : linuxx64_12201_database.zip
Oracle Database 12c Release 2 Global Service Manager : linuxx64_12201_gsm.zip

In this demo we use the following configuration to create sharded databases on sharddemo2 and sharddemo3.
VM sharddemo1: catalog
VM sharddemo2: shard
VM sharddemo3: shard

Oracle 12.2 should be installed on all servers (the Oracle software installation is not shown here).
The GSM software should be installed on the catalog server, sharddemo1.

After unzipping the file, just launch the installer (runInstaller):
[oracle@sharddemo1 gsm122]$ ./runInstaller

[root@sharddemo1 oracle122]# /u01/app/oracle/product/12.2.0.1/gsmhome_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.2.0.1/gsmhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@sharddemo1 oracle122]#

The second step is to create the catalog database on sharddemo1. We will name it ORCLCAT (NON-CDB). Some database parameters need to be configured for ORCLCAT. Database creation is not shown here.

SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata/';
System altered.
SQL> alter system set open_links=16 scope=spfile;
System altered.
SQL> alter system set open_links_per_instance=16 scope=spfile;
System altered.

The Oracle 12.2 database comes with the gsmcatuser schema. This schema is used by the shard director when connecting to the shard catalog database. It is locked by default, so we have to unlock it:

SQL> alter user gsmcatuser account unlock;
User altered.
SQL> alter user gsmcatuser identified by root;
User altered.

We also have to create the GSM administrator schema (mygdsadmin in our case) and give it the required privileges:

SQL> create user mygdsadmin identified by root;
User created.
SQL> grant connect, create session, gsmadmin_role to mygdsadmin;
Grant succeeded.
SQL> grant inherit privileges on user SYS to GSMADMIN_INTERNAL;
Grant succeeded.
The next step is to configure the scheduler on the shard catalog, by setting the remote scheduler's HTTP port and the agent registration password on the shard catalog database ORCLCAT:

SQL> execute dbms_xdb.sethttpport(8080);
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> @?/rdbms/admin/prvtrsch.plb


SQL> exec DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS('welcome');
PL/SQL procedure successfully completed.
SQL>

We now have to register the sharddemo2 and sharddemo3 agents with the scheduler. The schagent executable in $ORACLE_HOME/bin is used. After registration, the agents should be started.
Below is the registration for sharddemo2:

[oracle@sharddemo2 ~]$ schagent -start
Scheduler agent started using port 21440
[oracle@sharddemo2 ~]$ echo welcome | schagent -registerdatabase sharddemo1 8080
Agent Registration Password ?
Oracle Scheduler Agent Registration for 12.2.0.1.2 Agent
Agent Registration Successful!
[oracle@sharddemo2 ~]$

After agent registration, the corresponding database directories must be created on sharddemo2 and sharddemo3:

[oracle@sharddemo2 ~]$ mkdir /u01/app/oracle/oradata
[oracle@sharddemo2 ~]$ mkdir /u01/app/oracle/fast_recovery_area

Now it's time to launch the Global Data Services Control Utility (GDSCTL) on sharddemo1. GDSCTL is in $GSM_HOME/bin, in our case /u01/app/oracle/product/12.2.0.1/gsmhome_1/bin.

[oracle@sharddemo1 ~]$ gdsctl
GDSCTL: Version 12.2.0.1.0 - Production on Thu Mar 02 13:53:50 CET 2017
Copyright (c) 2011, 2016, Oracle. All rights reserved.
Welcome to GDSCTL, type "help" for information.
Warning: current GSM name is not set automatically because gsm.ora contains zero or several GSM entries. Use "set gsm" command to set GSM for the session.
Current GSM is set to GSMORA
GDSCTL>

And now create the shard catalog:

GDSCTL>create shardcatalog -database sharddemo1:1521:ORCLCAT -chunks 12 -user mygdsadmin/root -sdb cust_sdb -region region1
Catalog is created
GDSCTL>

Now let's create and start the shard director. The GSM listener should use a free port; in our case the port is 1571.

GDSCTL>add gsm -gsm region1_director -listener 1571 -pwd root -catalog sharddemo1:1521:ORCLCAT -region region1
GSM successfully added


GDSCTL>start gsm -gsm region1_director
GSM is started successfully
GDSCTL>


GDSCTL>status gsm
Alias REGION1_DIRECTOR
Version 12.2.0.1.0
Start Date 02-MAR-2017 14:03:36
Trace Level off
Listener Log File /u01/app/oracle/diag/gsm/sharddemo1/region1_director/alert/log.xml
Listener Trace File /u01/app/oracle/diag/gsm/sharddemo1/region1_director/trace/ora_21814_140615026692480.trc
Endpoint summary (ADDRESS=(HOST=sharddemo1.localdomain)(PORT=1571)(PROTOCOL=tcp))
GSMOCI Version 2.2.1
Mastership Y
Connected to GDS catalog Y
Process Id 21818
Number of reconnections 0
Pending tasks. Total 0
Tasks in process. Total 0
Regional Mastership TRUE
Total messages published 0
Time Zone +01:00
Orphaned Buddy Regions:
None
GDS region region1
GDSCTL>

We also have to set the scheduler agent password to “welcome” in gdsctl

GDSCTL>modify catalog -agent_password welcome
The operation completed successfully
GDSCTL>

The OS credential for the user “oracle” must be defined. We are using the same OS credential for all the shards

GDSCTL>add credential -credential oracle_cred -osaccount oracle -ospassword root
The operation completed successfully
GDSCTL>

Before deploying the shards we have to define metadata for them.

GDSCTL>set gsm -gsm region1_director
GDSCTL>connect mygdsadmin/root
Catalog connection is established
GDSCTL>


GDSCTL>add shardgroup -shardgroup shgrp1 -deploy_as primary -region region1
The operation completed successfully
GDSCTL>


GDSCTL>add invitednode sharddemo2


GDSCTL>create shard -shardgroup shgrp1 -destination sharddemo2 -credential oracle_cred
The operation completed successfully
DB Unique Name: sh1
GDSCTL>


GDSCTL>add invitednode sharddemo3


GDSCTL>create shard -shardgroup shgrp1 -destination sharddemo3 -credential oracle_cred
The operation completed successfully
DB Unique Name: sh21
GDSCTL>

We can then verify the status of our configuration

GDSCTL>config shard
Name Shard Group Status State Region Availability
---- ----------- ------ ----- ------ ------------
sh1 shgrp1 U none region1 -
sh21 shgrp1 U none region1 -

If there is no error, it's time to deploy our shards. Deployment is the last step before creating the schema we will use for system-managed sharding.

GDSCTL>deploy
deploy: examining configuration...
deploy: deploying primary shard 'sh1' ...
deploy: network listener configuration successful at destination 'sharddemo2'
deploy: starting DBCA at destination 'sharddemo2' to create primary shard 'sh1' ...
deploy: deploying primary shard 'sh21' ...
deploy: network listener configuration successful at destination 'sharddemo3'
deploy: starting DBCA at destination 'sharddemo3' to create primary shard 'sh21' ...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: DBCA primary creation job succeeded at destination 'sharddemo3' for shard 'sh21'
deploy: waiting for 1 DBCA primary creation job(s) to complete...
deploy: DBCA primary creation job succeeded at destination 'sharddemo2' for shard 'sh1'
deploy: requesting Data Guard configuration on shards via GSM
deploy: shards configured successfully
The operation completed successfully
GDSCTL>

The command may take some time. Once it is done, running the config command again should return:

GDSCTL>config shard
Name Shard Group Status State Region Availability
---- ----------- ------ ----- ------ ------------
sh1 shgrp1 Ok Deployed region1 ONLINE
sh21 shgrp1 Ok Deployed region1 ONLINE

We should have two instances running on sharddemo2 and sharddemo3: sh1 and sh21.
Now that the shards are deployed, let's create in the shard catalog database ORCLCAT the schema we will use for sharding. Here the user is called user_shard.

SQL> show parameter instance_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
instance_name string ORCLCAT
SQL>


SQL>alter session enable shard ddl;
SQL>create user user_shard identified by user_shard;
SQL>grant connect, resource, alter session to user_shard;
SQL>grant execute on dbms_crypto to user_shard;
SQL>grant create table, create procedure, create tablespace, create materialized view to user_shard;
SQL>grant unlimited tablespace to user_shard;
SQL>grant select_catalog_role to user_shard;
SQL>grant all privileges to user_shard;
SQL>grant gsmadmin_role to user_shard;
SQL>grant dba to user_shard;

In a sharding environment, we have two types of tables:
Sharded tables: data is distributed across the different shards.
Duplicated tables: data is duplicated on each of the shards.
Let's create tablespaces for each type of table.

SQL> CREATE TABLESPACE SET TAB_PRIMA_SET using template (datafile size 100m autoextend on next 10M maxsize unlimited extent management local segment space management auto );
Tablespace created.


SQL> CREATE TABLESPACE TAB_PRODUCT datafile size 100m autoextend on next 10M maxsize unlimited extent management local uniform size 1m;
Tablespace created

Now, under the user_shard schema, let's create the sharded and duplicated tables in ORCLCAT.

CREATE SHARDED TABLE Customers
(
CustId VARCHAR2(60) NOT NULL,
FirstName VARCHAR2(60),
LastName VARCHAR2(60),
Class VARCHAR2(10),
Geo VARCHAR2(8),
CustProfile VARCHAR2(4000),
CONSTRAINT pk_customers PRIMARY KEY (CustId),
CONSTRAINT json_customers CHECK (CustProfile IS JSON)
) TABLESPACE SET TAB_PRIMA_SET PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;
Table created.


CREATE SHARDED TABLE Orders
(
OrderId INTEGER NOT NULL,
CustId VARCHAR2(60) NOT NULL,
OrderDate TIMESTAMP NOT NULL,
SumTotal NUMBER(19,4),
Status CHAR(4),
constraint pk_orders primary key (CustId, OrderId),
constraint fk_orders_parent foreign key (CustId)
references Customers on delete cascade
) partition by reference (fk_orders_parent);


CREATE DUPLICATED TABLE Products
(
ProductId INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
Name VARCHAR2(128),
DescrUri VARCHAR2(128),
LastPrice NUMBER(19,4)
) TABLESPACE TAB_PRODUCT;
Table created.

Some checks can be done on the instances (ORCLCAT, sh1, sh21) to verify, for example, that the tablespaces, sharded tables and duplicated tables were created.

SQL> select name from v$database;
NAME
---------
ORCLCAT

SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by tablespace_name;
TABLESPACE_NAME MB
------------------------------ ----------
SYSAUX 660
SYSTEM 890
TAB_PRIMA_SET 100
TAB_PRODUCT 100
UNDOTBS1 110
USERS 5

on sh1 and sh21

select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by tablespace_name;
C001TAB_PRIMA_SET 100
C002TAB_PRIMA_SET 100
C003TAB_PRIMA_SET 100
C004TAB_PRIMA_SET 100
C005TAB_PRIMA_SET 100
C006TAB_PRIMA_SET 100
SYSAUX 660
SYSTEM 890
SYS_SHARD_TS 100
TAB_PRIMA_SET 100
TAB_PRODUCT 100
UNDOTBS1 115
USERS 5

On sharddemo2 (sh1) for example verify that the chunks and chunk tablespaces are created

SQL> select table_name, partition_name, tablespace_name from dba_tab_partitions where tablespace_name like 'C%TAB_PRIMA_SET' order by tablespace_name;
CUSTOMERS CUSTOMERS_P1 C001TAB_PRIMA_SET
ORDERS CUSTOMERS_P1 C001TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P2 C002TAB_PRIMA_SET
ORDERS CUSTOMERS_P2 C002TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P3 C003TAB_PRIMA_SET
ORDERS CUSTOMERS_P3 C003TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P4 C004TAB_PRIMA_SET
ORDERS CUSTOMERS_P4 C004TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P5 C005TAB_PRIMA_SET
ORDERS CUSTOMERS_P5 C005TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P6 C006TAB_PRIMA_SET
ORDERS CUSTOMERS_P6 C006TAB_PRIMA_SET

On ORCLCAT

SQL> select table_name from user_tables;
TABLE_NAME
---------------------------------------------------
MLOG$_PRODUCTS
PRODUCTS
CUSTOMERS
ORDERS
RUPD$_PRODUCTS

On sh1

SQL> select table_name from user_tables;
TABLE_NAME
------------------------------------------------------------
PRODUCTS
CUSTOMERS
ORDERS

on sh21

SQL> select table_name from user_tables;
TABLE_NAME
--------------------------------------------------------------
PRODUCTS
CUSTOMERS
ORDERS

Using GDSCTL on sharddemo1, we can also see the DDL that was executed, by using the show ddl command on the GSM interface:

GDSCTL>show ddl
id DDL Text Failed shards
-- -------- -------------
6 grant select_catalog_role to user_shard;
7 grant all privileges to user_shard;
8 grant gsmadmin_role to user_shard;
9 grant dba to user_shard;
10 CREATE TABLESPACE SET TAB_PRIMA_SET u...
11 CREATE TABLESPACE TAB_PRODUCT datafil...
12 CREATE SHARDED TABLE Customers ( Cust...
13 CREATE SHARDED TABLE Orders ( OrderId...
14 create database link "PRODUCTSDBLINK@...
15 CREATE MATERIALIZED VIEW "PRODUCTS" ...

And now our sharding should work. After inserting some data, we can see that for duplicated tables the whole data set is replicated to the different shards.
On ORCLCAT, for example, the number of rows in the products table is 9:

SQL> select count(*) from products;
COUNT(*)
----------
9

On sh1, as products is a duplicated table, the number of rows should also be 9:

SQL> select count(*) from products;
COUNT(*)
----------
9

The same goes for the products table on sh21:

SQL> select count(*) from products;
COUNT(*)
----------
9

For sharded tables, we can see that rows are distributed

On ORCLCAT

SQL> select count(*) from customers;
COUNT(*)
----------
14

On sh1

SQL> select count(*) from customers;
COUNT(*)
----------
6

On sh21, the number of rows in customers should be 8:

SQL> select count(*) from customers;
COUNT(*)
----------
8

Conclusion
In this first part we talked about sharding configuration. We have seen how, using Oracle Global Data Services, we can create shards. In the second part we will see how to connect to the shards and how scalability is achieved in a sharded environment.

 

Cet article Sharding with Oracle 12c R2 Part I est apparu en premier sur Blog dbi services.

How to embed HTTP content inside a HTTPS webpage / Mixed content problems

Dietrich Schroff - Sat, 2017-03-04 03:27
If you are running a web page and you decide to move to SSL protection, you can encounter the following problem: inside your web page you are using tags like "iframe", "script" or "link" pointing to HTTP servers. This is considered mixed active content (Mozilla):

Mixed active content is content that has access to all or parts of the Document Object Model of the HTTPS page. This type of mixed content can alter the behavior of the HTTPS page and potentially steal sensitive data from the user. Hence, in addition to the risks described for mixed display content above, mixed active content is vulnerable to a few other attack vectors.
And this will not work...

The best solution is: change all links from HTTP to HTTPS and you are done.

But there are still websites which offer their content in HTTP only. If you really trust them, you can do the following:
Wrap the link in an HTTPS proxy like https://ssl-proxy.my-addr.org/myaddrproxy.php/http/yourlink

Of course this is not an excellent solution, but a workaround which allows you to protect your website; if you separate this workaround from the pages which deal with sensitive content, you should be fine...

DBMS_COMPARISON: ora-23626: 'schema.indexname' not eligible index error

Tom Kyte - Fri, 2017-03-03 23:46
Hi Tom, I have a very large table with over 850 million rows of data. We are using CDC to extract the data from the source system to a target for publication and etl to a datawarehouse and ODS. I have a requirement to run periodic checks to ensu...
Categories: DBA Blogs

Why won't my table go in memory?

Tom Kyte - Fri, 2017-03-03 23:46
I downloaded the developers days VM with the latest 12.2 installation on it, and wanted to try out the in memory feature of 12c. I've read the documentation and done everything I think I need to, however unfortunately my table doesn't seem to want to...
Categories: DBA Blogs

SGA memory dynamic change

Tom Kyte - Fri, 2017-03-03 23:46
hi Tom, there is a question We have DB with following SGA info: SQL> select * from v$sgainfo; NAME BYTES RESIZEABLE -------------------------------- ---------- ---------- Fixed SGA Size 22...
Categories: DBA Blogs

Script to allow specific users the ability to kill a session

Tom Kyte - Fri, 2017-03-03 23:46
Hi, Is there a script available that can kill a session but will only allow specific users (pre-defined in the script) the capability to perform the kill? If not, how can this be performed?
Categories: DBA Blogs

Compile procedure automatically -- how to avoid cascading invalidations

Tom Kyte - Fri, 2017-03-03 23:46
I've two procedures: A calls B to do something. If I compile B, then A will become invalid. Can I have any setting in the database in order to compile A automatically when B is compiled? Thank you for your sincere help!
Categories: DBA Blogs

Amazon Launches Prime in India. Can Flipkart Stay ‘First’?

Abhinav Agarwal - Fri, 2017-03-03 20:33
On the 27th of June, 2016, Amazon launched the first of its AWS (Amazon Web Services) data centers in India, in Mumbai.

[Image: Amazon India announcing the launch of Prime (July 26, 2016)]

Less than a month later, on the 26th of July, 2016, Amazon launched Amazon Prime in India. After a free trial period of 60 days, customers would be able to sign up for what it calls a "special, introductory price" of ₹499 a year. Prime Video was not included in Prime at the time of launch.






[Image: comScore chart showing Amazon as the leading e-commerce site (Dec 2015)]
[Image: e-mail from Amazon founder Jeff Bezos (December 2015)]

This is a significant, though not unexpected, step by Amazon India in its battle to gain primacy in the Indian e-commerce space. In December 2015, it announced that it had become the "most visited e-commerce site in India" and offered a gift card worth ₹200 to customers (with some terms and conditions). That claim was also in some ways a sleight of hand, since it did not include visitors via mobile apps. This changed in July 2016, when an app data tracker stated that Amazon had become the most downloaded app on the Google and Apple app stores in India.

[Image: Amazon India website (July 26, 2016)]

Flipkart had launched its version of Amazon Prime in May 2014. Called "Flipkart First", it also was available for an annual price of ₹500. But for reasons best known to Flipkart, after an initial flurry of promotion and advertising, including a three-month giveaway to 75,000 customers, Flipkart did not seem to pursue it with any sort of vigor. Customers complained of many products being excluded from Flipkart First, and in the absence of any sustained campaign to make customers aware of the programme, it has slowly faded from memory. Flipkart also has not shared any numbers in some time about the subscriber base and growth of Flipkart First. Worse, there was a news story on July 20th about Flipkart planning to launch a programme called "F-Assured" as a replacement for Flipkart Advantage. The story suggested that the launch of F-Assured was also meant to "preempt the launch of Amazon Prime", something that did not come to pass.

Unlike Prime, which Amazon founder Jeff Bezos called one of the three bold bets Amazon had made ("AWS, Marketplace and Prime are all examples of bold bets at Amazon that worked" is what Bezos wrote in a letter to shareholders), Flipkart has let First become yet another of the many initiatives it has launched and failed to pursue with any meaningful degree of commitment or focus.

AWS, Marketplace and Prime are all examples of bold bets at Amazon that worked, and we're fortunate to have those three big pillars. They have helped us grow into a large company, and there are certain things that only large companies can do. [Jeff Bezos, letter to shareholders, 2016]

[Image: "Flipkart First" emailer, 2014]

Flipkart launched its e-book store, Flyte, in November 2012, almost a year before Amazon launched operations in India and more than a year before Amazon launched the Kindle in India. Yet Flipkart shuttered its e-book store in late 2015, citing e-books as not "a strategic fit." The same story played out with its digital music business, which it started in 2012 and shuttered in June 2013.


I had written more than a year back that Flipkart seemed to be losing focus, that it needed to beware of Amazon, whom I called "The Whispering Death" (an allusion to the great West Indian fast bowler Michael Holding), and I even suggested what it needed to do. Cloud computing was one piece of advice.

[Image: Amazon India Prime announcement on the upcoming launch of Video (July 26, 2016)]

Flipkart continues to lose in its battle against Amazon. It has suffered a steep erosion in its valuations, Amazon is gaining market share faster than Flipkart, and now even the battle of perceptions is being won by Amazon. Flipkart seems to have fallen into a predictable pattern of making a series of flashy announcements and not following up to make any of them a success.


After publishing this, later that day came news that Flipkart-owned fashion e-tailer Myntra had agreed to buy Jabong in an all-cash deal for $70 million. Jabong had been valued at over $500 million in 2013.
The takeaways from this acquisition can be summed up as follows:
  • Flipkart seems to be in the thought mode that it can fight off Amazon by retreating to a few select niches like fashion e-tailing that will give it some sort of a competitive advantage over the US behemoth. 
  • This is in many ways a reaffirmation of Flipkart’s “retreat” mentality, since they have retreated from e-books and digital music already. Amazon, on the other hand, believes in getting into every single segment that a customer spends money on. Witness its move into the office supplies market in the US, or into the home services market (estimated at between $400 billion to $800 billion a year in the US). 
  • Valuations mean little or nothing for non-listed companies. Jabong was valued at over $500 million in 2013; it was seeking to sell itself to Amazon India for $1 billion in 2014, could not find buyers even at $100 million earlier in 2016, and finally sold itself for 7 percent of its asking price. Flipkart would do well to understand the implications of valuations for itself. Denialism is not a strategy. 
Flipkart has been in the news for all the wrong reasons: for its penchant for helicoptering executives from Silicon Valley at million-dollar salaries, for high-profile executives, for valuation missteps, and more. Missing in all this is a laser-like focus on execution. If it wants to learn anything from Amazon, it should be this.

Will Flipkart step up?

References:

I first published this post on Medium on Jul 26, 2016. This was later republished in Swarajya on July 28, 2016.

© 2017, Abhinav Agarwal. All rights reserved.

Create Jupyterhub Container on Centos 7 on Proxmox

Jeff Moss - Fri, 2017-03-03 18:01

These instructions show how to create a Centos 7 container on Proxmox running JupyterHub.

Note – the instructions are just a guide and for use on my environment – you may need/wish to adjust for your own environment as necessary.

Versions
root@billy:~# pveversion
pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.40-1-pve)
The version of the notebook server is 4.4.1
Python 3.4.5 (default, Nov  9 2016, 16:24:59) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Create Container
pct create 153 u01:vztmpl/centos-7-default_20160205_amd64.tar.xz -rootfs 10 -hostname jupyterhub -memory 2048 -nameserver 192.168.1.25 -searchdomain oramoss.com -net0 name=eth0,bridge=vmbr0,gw=192.168.1.1,ip=192.168.1.153/24 -swap 2048 -cpulimit 2 -storage u01
Installation
yum update -y
yum install epel-release -y
yum install wget -y
cd ~
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u111-b14/jdk-8u111-linux-x64.rpm"
yum localinstall jdk-8u111-linux-x64.rpm -y
rm -f ~/jdk-8u111-linux-x64.rpm
vi /etc/environment
export JAVA_HOME=/usr/java/jdk1.8.0_111/jre
vi ~/.bash_profile
export PATH=${JAVA_HOME}/bin:$PATH
. ~/.bash_profile
java -version
yum install pdsh -y
yum -y install yum-cron
chkconfig yum-cron on
vi /etc/yum/yum-cron.conf
update_messages = no
apply_updates = yes
email_to = oramoss.jeffmoss@gmail.com
email_host = smtp.gmail.com
service yum-cron start
cd /etc/yum.repos.d
wget http://yum.oracle.com/public-yum-ol7.repo
yum install python34 -y
curl -O https://bootstrap.pypa.io/get-pip.py
/usr/bin/python3.4 get-pip.py
yum install npm nodejs-legacy -y --nogpgcheck
yum install anaconda -y --nogpgcheck
python3 -m pip install jupyterhub
npm config set strict-ssl false
npm install -g configurable-http-proxy
python3 -m pip install notebook
wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
wget ftp://ftp.icm.edu.pl/vol/rzm5/linux-slc/centos/7.0.1406/cernonly/x86_64/Packages/oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
wget ftp://bo.mirror.garr.it/1/slc/centos/7.1.1503/cernonly/x86_64/Packages/oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm
wget ftp://bo.mirror.garr.it/1/slc/centos/7.1.1503/cernonly/x86_64/Packages/oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
yum install oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm -y
yum install oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm -y
yum install oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm -y
vi ~/.bash_profile
export ORACLE_HOME=/usr/lib/oracle/12.1/client64
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
vi /etc/environment
export ORACLE_HOME=/usr/lib/oracle/12.1/client64
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
. ~/.bash_profile
yum install gcc -y
yum install python-devel -y
yum install python34-devel -y
pip install cx_Oracle
pip install ipython-sql
jupyterhub --generate-config
vi /root/jupyterhub_config.py # ensure the following are set:
c.Spawner.env_keep = ['LD_LIBRARY_PATH']
c.Spawner.environment = dict(LD_LIBRARY_PATH='/usr/lib/oracle/12.1/client64/lib:$LD_LIBRARY_PATH')
systemctl stop firewalld
systemctl disable firewalld
vi /lib/systemd/system/jupyterhub.service
[Unit]
Description=Jupyterhub
After=network-online.target
[Service]
User=root
ExecStart=/usr/bin/jupyterhub --ip=192.168.1.10
WorkingDirectory=/root
[Install]
WantedBy=multi-user.target
systemctl enable jupyterhub
systemctl start jupyterhub
systemctl status jupyterhub

That should be it…navigate to http://192.168.1.153:8000 and login with a unix user on that node.

Speaking at the next SQL Nexus at Copenhagen 2017

Yann Neuhaus - Fri, 2017-03-03 06:40

On May 2nd, I will have the chance to speak at the next SQL Nexus event in Copenhagen (1-3 May) about SQL Server 2016 and availability groups and, if I have enough time, about what is coming with SQL Server vNext.

This is also a good opportunity to attend other sessions held by well-known people in the industry like David Klee, Edwin M Sarmiento, Wolfgang Strasser and Uwe Ricken, to name a few.

I'm looking forward to sharing and learning with the SQL Server community.

Hope to see you there!

 

Cet article Speaking at the next SQL Nexus at Copenhagen 2017 est apparu en premier sur Blog dbi services.
