Feed aggregator

composite hash - list partitioning

Tom Kyte - Mon, 2017-08-07 12:46
Tom, does Oracle support hash partitioning combined with range or list partitioning on each hash partition, i.e. hash-range composite partitioning and hash-list composite partitioning? The "vldb and partitioning manual" does not list this combination - wasn't sure ...
Categories: DBA Blogs

Identifying whether an entry is generated "by default" in a trigger

Tom Kyte - Mon, 2017-08-07 12:46
Dear Tom, When having a table with an autogenerated ID like this: <code>create table ids ( id number generated by default as identity, t varchar2(300));</code> And creating a trigger like this: <code>create or replace trigger ids_trig be...
Categories: DBA Blogs

How to export 44 lakhs of records from a table in oracle?

Tom Kyte - Mon, 2017-08-07 12:46
How to export 44 lakhs of records from a table in oracle?
Categories: DBA Blogs

Rebuilding Indexes

Tom Kyte - Mon, 2017-08-07 12:46
I know this question has been asked earlier and I am sorry to take up this question slot but I am confused regarding rebuilding indexes. If I am interpreting it correctly, you don't recommend rebuilding indexes at all. I have talked to two se...
Categories: DBA Blogs

DBA_DEPENDENCY_COLUMNS

Tom Kyte - Mon, 2017-08-07 12:46
Tom, Happy New Year. Thanks for all your many contributions to the Oracle community. I want to find out column dependencies on views. Basically if I have a view MYVIEW with columns A,B,C,D I'd like to write a query to show the source table/colu...
Categories: DBA Blogs

Synthesize rows based on column values

Tom Kyte - Mon, 2017-08-07 12:46
I have one database table test.The structure of the table is: Col1(varchar) Col2(number) The table has 2 rows: Abc 5 Def 6 I desire the output to be: Col1 Abc Abc Abc Abc Abc Def Def Def Def Def Def I need to write a single q...
Categories: DBA Blogs

How to load repetitive similar kind of structure of data format from a plain text file to DB tables?

Tom Kyte - Mon, 2017-08-07 12:46
Hi Oracle Masters I have been assigned a requirement to load data from a text file say myreport.txt to the Oracle Tables. text file contains data set of marks of every subject for students. file myreport.txt -------- 10th standard results de...
Categories: DBA Blogs

New Oracle Security On-Line Training Dates Added

Pete Finnigan - Mon, 2017-08-07 12:46
We have finally added new on-line training dates for some of our classes; the very popular two days "How to perform a security audit of an Oracle Database" is first followed by the one day class "Hardening and Securing Oracle....[Read More]

Posted by Pete On 07/08/17 At 06:30 PM

Categories: Security Blogs

How vendor support can help improve ongoing IT operations

Chris Warticki - Mon, 2017-08-07 11:45

Author:
Elaina Stergiades, Research Manager, Software and Hardware Support Services, IDC

The previous discussions (Part 1 & Part 2) focused on how to manage IT problems: either solving IT problems when they occur (reactive support), or preventing IT issues from affecting critical business processes (predictive/preventive support).  There’s no question that reactive support and preventive/predictive support for critical IT systems will remain an important function of hardware and software support going forward.  However, vendor-driven support now typically includes an additional IT service capability that can provide key insight and guidance to CIOs and IT managers.  As business leaders look for more advanced technology solutions that can help improve the customer experience and drive revenue, flexibility and agility in IT service delivery are no longer optional.  As a result, IT organizations are looking to support providers for assurance in helping improve IT operations across their integrated, heterogeneous environments.

As more enterprises look to modernize their IT systems by implementing mobile, social and cloud solutions, IT processes are shifting away from supporting specific technologies to directly supporting business processes.  This is a complex shift for most IT organizations, with far-reaching implications for how support is purchased, delivered and consumed.  Hardware and software support providers are increasingly asked to go beyond reactive support and preventive/predictive support for specific technologies.  CIOs and IT managers are looking for help optimizing operations across the IT landscape, and delivering on the original promise of these systems.  Increasingly, that means considering support providers that can assure a seamless and comprehensive experience across their IT stack.

At IDC, our research shows that hardware and software support providers now include non-traditional support capabilities as part of support offerings.  These services are largely intended to help optimize IT operations, but many are even structured to help with software adoption and utilization across the business.  IDC believes the rapid adoption of cloud technologies is fueling this transformation, as CIOs look to “get what they paid for” from the IT providers – regardless of the deployment. 

With a deeper understanding of the technology itself, and direct visibility into the customer environment, the original hardware and software vendors can offer a comprehensive mix of these non-traditional support capabilities for resource-strapped IT organizations.  Some of these tools require direct access to the underlying technologies, which may only be available from the original technology vendor. IDC recommends considering support providers with a portfolio of services tailored for optimizing IT operations, including:

Planning for migrations and new technology deployments, with deep understanding of the technology under consideration, the current IT landscape and proposed customer roadmap

Fast and efficient contract management, especially when IT assets must be scaled up, scaled down or reallocated quickly to accommodate changing business requirements

Expanded training capabilities to help speed software adoption and utilization

Peer-to-peer best practice sharing, including industry benchmarking

Replacing mundane day-to-day IT operations with automated solutions, so CIOs and IT managers can focus on innovations that directly affect the bottom line

IDC recommends considering support from hardware and software vendors with these support capabilities, going beyond break-fix and problem avoidance to assuring a full range of comprehensive services that can help optimize ongoing IT operations.

Elaina Stergiades is the Research Manager for IDC's Software Support Services program. In this position, she provides insight and analysis of industry trends and market strategies for software vendors supporting applications, development environment and systems software. Elaina is also responsible for research, writing and program development of the software support services market.

Prior to joining IDC, Elaina spent 10 years in the software and web design industries. As a quality assurance engineer at Parametric Technology and Weather Services International (WSI), she led testing efforts for new applications and worked closely with customers to design and implement new functionality. Elaina also worked in product marketing at WSI, directing an initiative to launch a new weather crawl system. More recently, she was a project manager at Catalyst online. At Catalyst, Elaina was responsible for managing client search marketing campaigns targeting increased website traffic, revenue and top search engine rankings.

Elaina has a B.S. in mechanical engineering from Cornell University and an M.B.A. from Babson College.

24 HOP French edition 2017 – Session videos are available

Yann Neuhaus - Mon, 2017-08-07 10:09

The 2nd edition of 24HOP French Edition 2017 is over and we had great sessions on SQL Server and various topics (SQL Server 2017 new features, Azure, PowerBI, High Availability, Linux, Hyper-convergence, Modeling …).


If you did not attend this event, you now have the opportunity to watch the videos of the different sessions. From my side, I had the chance to present SQL Server and High Availability on Linux.

Hope to see you next time!


The article 24 HOP French edition 2017 – Session videos are available appeared first on Blog dbi services.

Words I Don’t Use, Part 4: “Expert”

Cary Millsap - Mon, 2017-08-07 09:55
The fourth “word I do not use” is expert.

When I was a young boy, my dad would sometimes drive me to school. It was 17 miles of country roads and two-lane highways, so it gave us time to talk.

At least once a year, and always on the first day of school, he would tell me, “Son, there are two answers to every test question. There’s the correct answer, and there’s the answer that the teacher expects. ...They’re not always the same.”

He would continue, “And I expect you to know them both.”

He wanted me to make perfect grades, but he expected me to understand my responsibility to know the difference between authority and truth. My dad thus taught me from a young age to be skeptical of experts.

The word expert always warns me of a potentially dangerous type of thinking. The word is used to confer authority upon the person it describes. But it’s ideas that are right or wrong; not people. You should evaluate an idea on its own merit, not on the merits of the person who conveys it. For every expert, there is an equal and opposite expert; but for every fact, there is not necessarily an equal and opposite fact.

A big problem with expert is corruption—when self-congratulators hijack the label to confer authority upon themselves. But of course, misusing the word erodes the word. After too much abuse within a community, expert makes sense only with finger quotes. It becomes a word that critical thinkers use only ironically, to describe people they want to avoid.

MariaDB – Speed up your logical MariaDB backups with mydumper

Yann Neuhaus - Mon, 2017-08-07 07:08

By default, MariaDB ships with a utility called mysqldump for logical backups. For more information, please take a look at the following link.

https://mariadb.com/kb/en/mariadb/mysqldump/

mysqldump has advantages: it is easy to use and it ships with the standard MariaDB installation, so no additional installation is needed. However, it also has some disadvantages: it is single threaded and it writes to one big file, even in the latest version, which is MariaDB 10.2.7 at the moment.

In case you want to dump out your data very quickly, this can be your bottleneck. This is where mydumper comes into play. The main feature of mydumper is that it can parallelize the dump. The mydumper utility uses 4 parallel threads by default if not otherwise specified.

./mydumper --help | grep threads
  -t, --threads               Number of threads to use, default 4

Another cool feature is compression.

./mydumper --help | grep compress
  -c, --compress              Compress output files
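
Both options can be combined in a single run. As a quick sketch (the thread count and output directory here are placeholders, not values from this setup):

./mydumper --threads=8 --compress --outputdir=/mydump/exports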

The biggest disadvantage is that mydumper is not delivered out of the box; you have to compile it yourself. To do so, simply follow these steps:

Install the packages which are needed to compile mydumper

# yum install gcc gcc-c++ glib2-devel mysql-devel zlib-devel \
  pcre-devel openssl-devel cmake

Unzip and compile mydumper

$ unzip mydumper-master.zip
mysql@mysql01:/u00/app/mysql/product/tools/ [mysqld1] unzip mydumper-master.zip
Archive:  mydumper-master.zip
e643528321f51e21a463156fbf232448054b955d
   creating: mydumper-master/
  inflating: mydumper-master/.bzrignore
  inflating: mydumper-master/CMakeLists.txt
  inflating: mydumper-master/README
  inflating: mydumper-master/binlog.c
  inflating: mydumper-master/binlog.h
   creating: mydumper-master/cmake/
   creating: mydumper-master/cmake/modules/
  inflating: mydumper-master/cmake/modules/CppcheckTargets.cmake
  inflating: mydumper-master/cmake/modules/FindGLIB2.cmake
  inflating: mydumper-master/cmake/modules/FindMySQL.cmake
  inflating: mydumper-master/cmake/modules/FindPCRE.cmake
  inflating: mydumper-master/cmake/modules/FindSphinx.cmake
  inflating: mydumper-master/cmake/modules/Findcppcheck.cmake
  inflating: mydumper-master/cmake/modules/Findcppcheck.cpp
  inflating: mydumper-master/common.h
  inflating: mydumper-master/config.h.in
   creating: mydumper-master/docs/
  inflating: mydumper-master/docs/CMakeLists.txt
   creating: mydumper-master/docs/_build/
  inflating: mydumper-master/docs/_build/conf.py.in
  inflating: mydumper-master/docs/_build/sources.cmake.in
  inflating: mydumper-master/docs/authors.rst
  inflating: mydumper-master/docs/compiling.rst
  inflating: mydumper-master/docs/examples.rst
  inflating: mydumper-master/docs/files.rst
  inflating: mydumper-master/docs/index.rst
  inflating: mydumper-master/docs/mydumper_usage.rst
  inflating: mydumper-master/docs/myloader_usage.rst
  inflating: mydumper-master/g_unix_signal.c
  inflating: mydumper-master/g_unix_signal.h
  inflating: mydumper-master/mydumper.c
  inflating: mydumper-master/mydumper.h
  inflating: mydumper-master/myloader.c
  inflating: mydumper-master/myloader.h
  inflating: mydumper-master/server_detect.c
  inflating: mydumper-master/server_detect.h

mysql@mysql01:/u00/app/mysql/product/tools/ [mysqld1] mv mydumper-master mydumper-0.9.2
mysql@mysql01:/u00/app/mysql/product/tools/ [mysqld1] cd mydumper-0.9.2
mysql@mysql01:/u00/app/mysql/product/tools/mydumper-0.9.2/ [mysqld1] cmake . -DCMAKE_INSTALL_PREFIX=/u00/app/mysql/product/tools/mydumper-0.9.2
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Using mysql-config: /u00/app/mysql/product/mysql-5.6.37/bin/mysql_config
-- Found MySQL: /u00/app/mysql/product/mysql-5.6.37/include, /u00/app/mysql/product/mysql-5.6.37/lib/libmysqlclient.so;/usr/lib64/libpthread.so;/usr/lib64/libm.so;/usr/lib64/librt.so;/usr/lib64/libdl.so
-- Found ZLIB: /usr/lib64/libz.so (found version "1.2.7")
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.27.1")
-- checking for one of the modules 'glib-2.0'
-- checking for one of the modules 'gthread-2.0'
-- checking for module 'libpcre'
--   found libpcre, version 8.32
-- Found PCRE: /usr/include
-- ------------------------------------------------
-- MYSQL_CONFIG = /u00/app/mysql/product/mysql-5.6.37/bin/mysql_config
-- CMAKE_INSTALL_PREFIX = /u00/app/mysql/product/tools/mydumper-0.9.2
-- BUILD_DOCS = ON
-- WITH_BINLOG = OFF
-- RUN_CPPCHECK = OFF
-- Change a values with: cmake -D<Variable>=<Value>
-- ------------------------------------------------
--
-- Configuring done
-- Generating done
-- Build files have been written to: /u00/app/mysql/product/tools/mydumper-0.9.2

HINT: In case you don’t have Sphinx installed, you can use the -DBUILD_DOCS=OFF option. Sphinx is a documentation generator. For more information see http://sphinx-doc.org/

mysql@mysql01:/u00/app/mysql/product/tools/mydumper-0.9.2/ [mysqld1] make
Scanning dependencies of target mydumper
[ 16%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
[ 33%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
[ 50%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
Linking C executable mydumper
[ 50%] Built target mydumper
Scanning dependencies of target myloader
[ 66%] Building C object CMakeFiles/myloader.dir/myloader.c.o
Linking C executable myloader
[ 66%] Built target myloader
Scanning dependencies of target doc_sources
[ 66%] Built target doc_sources
Scanning dependencies of target doc_html
[ 83%] Building HTML documentation with Sphinx
/u00/app/mysql/product/tools/mydumper-0.9.2/docs/_sources/files.rst:39: WARNING: unknown option: mydumper --schemas
WARNING: html_static_path entry '/u00/app/mysql/product/tools/mydumper-0.9.2/docs/_static' does not exist
[ 83%] Built target doc_html
Scanning dependencies of target doc_man
[100%] Building manual page with Sphinx
[100%] Built target doc_man

mysql@mysql01:/u00/app/mysql/product/tools/mydumper-0.9.2/ [mysqld1] make install
[ 50%] Built target mydumper
[ 66%] Built target myloader
[ 66%] Built target doc_sources
[ 83%] Building HTML documentation with Sphinx
[ 83%] Built target doc_html
[100%] Building manual page with Sphinx
[100%] Built target doc_man
Install the project...
-- Install configuration: ""
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/bin/mydumper
-- Removed runtime path from "/u00/app/mysql/product/tools/mydumper-0.9.2/bin/mydumper"
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/bin/myloader
-- Removed runtime path from "/u00/app/mysql/product/tools/mydumper-0.9.2/bin/myloader"
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/authors.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/compiling.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/examples.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/files.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/index.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/mydumper_usage.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/myloader_usage.rst
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/authors.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/authors.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/compiling.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/examples.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/files.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/index.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/mydumper_usage.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_sources/myloader_usage.txt
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/compiling.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/examples.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/files.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/index.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/mydumper_usage.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/myloader_usage.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/genindex.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/search.html
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/pygments.css
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/ajax-loader.gif
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/basic.css
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/comment-bright.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/comment-close.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/comment.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/doctools.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/down-pressed.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/down.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/file.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/jquery-1.11.1.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/jquery.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/minus.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/plus.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/searchtools.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/underscore-1.3.1.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/underscore.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/up-pressed.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/up.png
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/websupport.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/classic.css
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/sidebar.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/_static/default.css
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/.buildinfo
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/searchindex.js
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/doc/mydumper/html/objects.inv
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/man/man1/mydumper.1
-- Installing: /u00/app/mysql/product/tools/mydumper-0.9.2/share/man/man1/myloader.1
mysql@mysql01:/u00/app/mysql/product/tools/mydumper-0.9.2/ [mysqld1]

mysql@mysql01:/u00/app/mysql/product/tools/ [mysqld1] ln -s mydumper-0.9.2 mydumper
mysql@mysql01:/u00/app/mysql/product/tools/ [mysqld1]

If everything compiled correctly, you will see two new binaries: mydumper and myloader.

mysql@mysql01:/u00/app/mysql/product/tools/mydumper/bin/ [mysqld1] ls -l
total 280
-rwxr-xr-x 1 mysql mysql 218808 Aug  7 07:25 mydumper
-rwxr-xr-x 1 mysql mysql  63448 Aug  7 07:25 myloader
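
As a side note, dumps taken with mydumper are meant to be restored with the companion myloader. A hedged sketch of a restore (option names as documented for mydumper 0.9.x; verify with ./myloader --help):

./myloader --directory=/mydump/mysqld1/mydumper_mysqld1 --threads=6 --overwrite-tables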

Besides that, you will have the documentation compiled as HTML in the ../mydumper-0.9.2/share/doc/mydumper/html folder.

MyDumper HTML

Ok. Let's now see mysqldump vs. mydumper in action. My sample database is about 10G in size. Of course, the bigger the database, the bigger the performance benefit of mydumper will be.

First, we dump out all databases with mysqldump (without and with compression) and record the time.

-- no compression 

mysql@mysql01:/u00/app/mysql/ [mysqld1] mysqldump --version 
mysqldump  Ver 10.16 Distrib 10.2.7-MariaDB, for Linux (x86_64) 

mysql@mysql01:/u00/app/mysql/ [mysqld1] time mysqldump --defaults-file=/u00/app/mysql/admin/mysqld1/.my.cnf --single-transaction --all-databases > mysqldump.sql 

real    3m38.94s 
user    1m29.11s 
sys     0m11.85s 

mysql@mysql01:/u00/app/mysql/ [mysqld1] ls -lh mysqldump.sql     
-rw-r--r-- 1 mysql mysql 10G Aug  7 11:33 mysqldump.sql 

-- compression 

mysql@mysql01:/u00/app/mysql/ [mysqld1] time mysqldump --defaults-file=/u00/app/mysql/admin/mysqld1/.my.cnf --single-transaction --all-databases | gzip > mysqldump.sql.gz 

real    4m43.75s 
user    4m55.25s 
sys     0m10.65s 

mysql@mysql01:/u00/app/mysql/ [mysqld1] ls -lh mysqldump.sql.gz 
-rw-r--r-- 1 mysql mysql 3.1G Aug  7 11:55 mysqldump.sql.gz

The uncompressed dump took about 3 minutes 39 seconds (10G) and the compressed one about 4 minutes 44 seconds (3.1G).

Now we repeat it with mydumper.

-- no compression 

mysql@mysql01:/u00/app/mysql/ [mysqld1] time /u00/app/mysql/product/tools/mydumper/bin/mydumper --defaults-file=/u00/app/mysql/admin/mysqld1/.my.cnf --trx-consistency-only --threads=6 --outputdir=/mydump/mysqld1/mydumper_mysqld1 

real    1m22.44s 
user    0m41.17s 
sys     0m7.31s 

mysql@mysql01:/u00/app/mysql/ [mysqld1] du -hs /mydump/mysqld1/mydumper_mysqld1/ 
10G     mydumper_mysqld1/ 

-- compression 

mysql@mysql01:/u00/app/mysql/ [mysqld1] time /u00/app/mysql/product/tools/mydumper/bin/mydumper --defaults-file=/u00/app/mysql/admin/mysqld1/.my.cnf --trx-consistency-only --threads=6 --compress --outputdir=/mydump/mysqld1/mydumper_mysqld1 

real    3m4.99s 
user    3m54.94s 
sys     0m5.11s 

mysql@mysql01:/u00/app/mysql/ [mysqld1] du -hs /mydump/mysqld1/mydumper_mysqld1/ 
3.1G    mydumper_mysqld1/

With mydumper, the uncompressed dump took about 1 minute 22 seconds (10G) and the compressed one about 3 minutes 5 seconds (3.1G).

As you can see in the results, the uncompressed dump was about 3 times faster with mydumper, while the compressed mydumper export was only about 30% faster. The reason the compressed export gains less might be that I have only two virtual CPUs assigned to my VM.
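
For reference, the arithmetic behind those figures: uncompressed, 218.94s / 82.44s ≈ 2.7x faster; compressed, 283.75s / 184.99s ≈ 1.5x, i.e. roughly a third less elapsed time.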

Conclusion

MyDumper is a great tool that can speed up your database exports quite dramatically. Take a look at it. It might be worth it.


The article MariaDB – Speed up your logical MariaDB backups with mydumper appeared first on Blog dbi services.

Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Web Content Management

Oracle Press Releases - Mon, 2017-08-07 07:00
Press Release
Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Web Content Management Oracle positioned as a leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Aug 7, 2017

Oracle today announced that it has been named a leader in Gartner’s 2017 “Magic Quadrant for Web Content Management*” report. Oracle believes this placement is another proof point of momentum for Oracle’s hybrid cloud strategy with Oracle WebCenter Sites and growth for Oracle Content and Experience Cloud, part of the Oracle Cloud Platform.

“We believe this placement is further validation of Oracle’s continued momentum in the content as a service space and larger PaaS and SaaS market,” said Amit Zavery, senior vice president, product development, Oracle Cloud Platform. “Without proper tools, organizations cannot manage all types of content in a meaningful way. Not only does our solution put content in the hands of its owners, but it also offers the versatility and comprehensiveness to support a broad range of initiatives.”

According to Gartner, “Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of digital business. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit.”

Oracle’s capabilities extend beyond the typical role of content management. Oracle provides low-code development tools for building digital experiences that exploit a service catalog of data connections. Oracle Content and Experience Cloud enables organizations to manage and deliver content to any digital channel to drive effective engagement with customers, partners, and employees. With Oracle Content and Experience Cloud, organizations can enable content collaboration and deliver a consistent omni-channel experience with one central content hub.

Download Gartner’s 2017 “Magic Quadrant for Web Content Management” here.

Oracle WebCenter Sites and Oracle Content and Experience Cloud enable organizations to build rich digital experiences with centralized content management, providing a unified repository to house unstructured content and to deliver content in the proper format to customers, employees and partners, within the context of familiar applications that fit the way they work.

* Gartner, “Magic Quadrant for Web Content Management,” Mick MacComascaigh, Jim Murphy, July 2017

Contact Info
Kristin Reeves
Blanc & Otus
+1.415.856.5145
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Talk to a Press Contact

Kristin Reeves

  • +1.415.856.5145

Sarah Fraser

  • +1.650.743.0660

Create user stories and supporting ERD

Dimitri Gielis - Mon, 2017-08-07 02:19
This post is part of a series of posts: From idea to app or how I do an Oracle APEX project anno 2017

In the first post we defined the high-level idea of our application. Now what are the real requirements? What does the app have to do? In an Agile software development approach we typically create user stories to define that. We write sentences in the form of:
As a < type of user >, I want < some goal > so that < some reason >

Goal of defining user stories

The only relevant reason to write user stories is to have a discussion with the people you're building the application for. To developers, those stories give an overview of what is expected, before development starts, in a language that all parties understand.

Some people might like to write a big requirements document, but personally I just don't like that (neither reading nor writing one). I really want to speak to the people I'm going to build something for, to put myself in their shoes and really understand and feel their pain. If somebody gives me a very detailed requirements document, they don't give me much freedom. Most people don't even know what is technically possible or what could really help them.


I like this quote of Henry Ford which illustrates the above:


If I had asked people what they wanted, they would have said faster horses.

Now having said that, you have to be really careful with my statement above... it really depends on the developer whether you can give them freedom or not. I know many developers who are the complete opposite of me and just want you to tell them exactly what and how to build. The same applies to people who have a bright idea; they do know what would help them. I guess it comes down to this: use each other's strengths and be open and supportive in the communication.
User stories for our project

In our multiplication table project we will write user stories for three different types of users: the player (child), the supervisor (parent/teacher) and the administrator of the app.
  • As a player, I want to start a session so that I can practice
  • As a player, I want to practice multiplications so that I get better at multiplying
  • As a player, I want to see how I did so that I know if I improved, stayed the same or did worse
  • As a player, I want to compare myself to other people so that I get a feeling of my level
  • As a supervisor, I want to register players so that they can practice
  • As a supervisor, I want to start the session so that player can practice
  • As a supervisor, I want to choose the difficulty level so that the player gets only exercises he's supposed to know
  • As a supervisor, I want to get an overview of the players progress and achievements
  • As a supervisor, I want to get an overview of the players mistakes
  • As a supervisor, I want to print a certificate so that the player feels proud of their achievement
  • As an administrator, I want to see the people who registered for the app so that I have an overview how much the app is used
  • As an administrator, I want to add, update and remove users so that the application can be maintained
  • As an administrator, I want to see statistics of the site so that I know if it's being used
The above is not meant to be a static list; on the contrary, whenever we think of something else we will come back to the list and add more sentences. So far I took the role of administrator and parent, my son that of child and my father that of teacher to come to this list. I welcome more people's ideas, so feel free to add more user stories in the comments with anything you think of. You can find more info on user stories and how to write them here.
More on Agile, Scrum, Kanban, XP

Before we move on with what I typically do after having discussed the requirements with the people, I want to touch on some buzzwords. I guess most companies claim they do Agile software development. The most popular Agile software development frameworks are Scrum, Kanban and XP. I'm far from an expert in any of those, but for me it all comes down to making the team more efficient at delivering what is really needed.

My company and I are not following any of those frameworks to the letter; instead we use a mix of all. We have a place where we note all the things we have to do (backlog), we develop iteratively and ship versions frequently (sprints), we have coding standards, we limit the work in progress (WIP), etc.

When we are doing consulting or development we adapt to how the customer likes to work. It also depends a bit on the size of the project and the team that is in place.

So my advice is: do whatever works best for you and your team. The only important thing at the end of the day is that you deliver (on time, on budget and what is needed) :)
Thinking in relational models

So when I really understand the problem, my mind starts to think in an entity relationship diagram, or in short ERD. I don't use any tool just yet; a few pieces of paper are all I need. I start writing down words, drawing circles and relations; in fact those will become my tables, columns and foreign keys later on. For me personally, drawing an ERD really helps me move to the next step of seeing what data I will have and how I should structure it. I read the user stories one by one and check whether I have a table for the data to build the story. I write down the ideas, comments and questions that pop up and put them on a cleaner piece of paper. This paper is again food for discussion with the end users.

Here are the papers for the multiplication table project:


Our ERD is not that complicated, I would say; we basically need a table to store the users who will connect to the site/app. I believe at first it will most likely be parents or teachers who are interested in this app. Every user has the "user" role, but some will have the administrator role, so the app can be managed. We could also use a flag in the user table to specify who's an admin, but I like to have a separate table for roles as it's more flexible, for example if we wanted to make a difference between a teacher and a parent in the future. Once you are in the app you create some players, most likely your children. Those players will play games, and every game consists of some details, for example which multiplication they did.

While reading the user stories, we also want some rankings. In the above ERD I could create the player's own ranking, or the ranking of the players of a user (supervisor), but it's not that flexible. That is why I added the concept of teams. A player can belong to one or more teams, so I could create a specific team my son and I belong to, so we can see each other's rank in that team, but I can also create a team of friends. The team concept also makes it flexible for teachers, so they can create their classes and add players to a specific class.

I also added a note that instead of a custom username/password, it might be interesting to add a social login like Facebook, just so the app is even easier to access. As I know social authentication will be included in Oracle APEX 5.2, I will hold off building it myself for now, but plan to upgrade our authentication scheme once Oracle APEX 5.2 comes out.

So my revised version of the ERD looks like this:


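To make the picture concrete, here is a minimal DDL sketch of the tables the ERD implies. All table and column names are my assumptions derived from the description above (the real model may well differ), and app_users is used instead of users only to avoid clashing with reserved words:

create table roles (
  id    number generated by default as identity primary key,
  name  varchar2(50) not null unique    -- 'user', 'administrator', maybe 'teacher' later
);

create table app_users (
  id        number generated by default as identity primary key,
  username  varchar2(100) not null unique,
  password  varchar2(200)               -- may be replaced by a social login later
);

create table user_roles (
  user_id  number not null references app_users,
  role_id  number not null references roles,
  primary key (user_id, role_id)
);

create table players (
  id       number generated by default as identity primary key,
  user_id  number not null references app_users,  -- the supervisor who registered the player
  name     varchar2(100) not null
);

create table teams (
  id    number generated by default as identity primary key,
  name  varchar2(100) not null
);

create table team_players (
  team_id    number not null references teams,
  player_id  number not null references players,
  primary key (team_id, player_id)
);

create table games (
  id         number generated by default as identity primary key,
  player_id  number not null references players,
  played_on  date default sysdate not null
);

create table game_details (
  id       number generated by default as identity primary key,
  game_id  number not null references games,
  factor1  number not null,              -- e.g. the 7 in 7 x 8
  factor2  number not null,
  answer   number,
  time_ms  number                        -- time taken, feeds the rankings
);
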
I hope this already gives some insight into the first steps I take when starting a project.

In the above post I didn't really go into the tools that support Agile software development (as I haven't used them yet); that is for another post.

If you have questions, comments or want to share your thoughts, don't hesitate to leave a comment on this post.
Categories: Development

Updated Whats New whitepaper - 4.3.0.4.0

Anthony Shorten - Sun, 2017-08-06 17:40

The Whats New in FW4 whitepaper has been updated for the latest service pack release. This whitepaper summarizes the major technology and functional changes implemented in the Oracle Utilities Application Framework from V2.2 up to the latest service pack. It is primarily of interest to customers upgrading from those earlier versions who want to understand what has changed and what is new in the framework since that early release.

The whitepaper is only a summary of selected enhancements, and it is still recommended to review the release notes of each release if you are interested in the details of everything that has changed. This whitepaper does not cover the changes to any of the products that use the Oracle Utilities Application Framework; refer to the release notes of the individual products for details of new functionality.

The whitepaper is available as Whats New in FW4 (Doc Id: 1177265.1) from My Oracle Support.

Inserting data into a table

Tom Kyte - Sun, 2017-08-06 00:06
Hello, I am trying to insert data into a table, The only thing is it is of 20 years. I have already created a query. The query is in a good shape but the only thing missing in my query is the dates. Below is my query. I want LV_START_DATE as 201...
Categories: DBA Blogs

orcl

Tom Kyte - Sun, 2017-08-06 00:06
I have table employee...which has two columns like....Name and Id... create table employee(name varchar2(10),id number); Insert into employee values('A',1); Insert into employee values('B',2); Insert into employee values('C',3); Name...
Categories: DBA Blogs

Postgres vs. Oracle access paths IV – Order By and Index

Yann Neuhaus - Sat, 2017-08-05 15:00

I realize that I'm talking about indexes in Oracle and Postgres and haven't yet mentioned the best website you can find about indexes, with concepts and examples for all RDBMS: http://use-the-index-luke.com. You will probably learn a lot there about SQL design. Now let's continue on execution plans with indexes.

As we have seen two posts ago, an index can be used even with a 100% selectivity (all rows), when we don't filter any rows. Oracle has INDEX FAST FULL SCAN, which is the fastest, reading blocks sequentially as they come. But this doesn't follow the chain of B*Tree leaves and does not return the rows in the order of the index. However, there is also the possibility to read the leaf blocks in index order, with INDEX FULL SCAN and random reads instead of multiblock reads.
It is similar to the Index Only Scan of Postgres, except that there is no need to go to the table to filter out uncommitted changes: Oracle reads the transaction table to get the visibility information, and goes to undo records if needed.

The previous post had a query with a 'where n is not null' predicate, to be sure to have all index entries in the Oracle indexes, and we will continue on this by adding an order by.

For this post, I've increased the size of the column N in the Oracle table by adding 1/3 to each number. I did this for this post only, and for the Oracle table only. The index on N is now 45 blocks instead of 20. The reason is to show what happens when the cost of 'order by' is high. I didn't change the Postgres table because there is only one way to scan the index there, where the result is always sorted.

Oracle Index Fast Full Scan vs. Index Full Scan


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dbck3rgnqbakg, child number 0
-------------------------------------
select /*+ */ n from demo1 where n is not null order by n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 46 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 46 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Index Full Scan, the random-read version of the index read, is chosen here by the Oracle optimizer because we want the result ordered by the column N and the index can provide this without additional sorting.

We can force the optimizer to do multiblock reads with the INDEX_FFS hint:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID anqfbf5caat2a, child number 0
-------------------------------------
select /*+ index_ffs(demo1) */ n from demo1 where n is not null order
by n
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 82 (100)| 10000 |00:00:00.01 | 51 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 82 (2)| 10000 |00:00:00.01 | 51 | 478K| 448K| 424K (0)|
| 2 | INDEX FAST FULL SCAN| DEMO1_N | 1 | 10000 | 14 (0)| 10000 |00:00:00.01 | 51 | | | |
-----------------------------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) "N"[NUMBER,22]
2 - "N"[NUMBER,22]

The estimated cost is higher: the index read is cheaper (cost=14 instead of 46) but the sort operation then brings it to 82. We can see additional columns in the execution plan here because the sort operation needs a workarea in memory (estimated at 478K, with 424K actually used during the execution). Note that the multiblock read has a few blocks of overhead (it reads 51 blocks instead of 48) because it has to read the segment header to identify the extents to scan.

Postgres Index Only Scan

In PostgreSQL there is only one way to scan indexes: random reads following the chain of leaf blocks. This returns the rows in the order of the index and does not require an additional sort:


explain (analyze,verbose,costs,buffers) select n from demo1 where n is not null order by n ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..295.29 rows=10000 width=4) (actual time=0.125..1.277 rows=10000 loops=1)
Output: n
Index Cond: (demo1.n IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=30
Planning time: 0.532 ms
Execution time: 1.852 ms

In the previous posts, we saw a cost of 0.29..270.29 for the Index Only Scan. Here we have an additional cost of 25 for the cpu_operator_cost, because I've added the 'where n is not null' predicate. As the default constant is 0.0025, this is the query planner estimating the cost of evaluating it for 10000 rows.
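
A quick way to check that constant in any PostgreSQL session (default settings assumed):

show cpu_operator_cost;
 cpu_operator_cost
-------------------
 0.0025

10000 rows x 0.0025 = 25, which is exactly the difference between the two estimates.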

First Rows

The Postgres cost always shows two values. The first one is the startup cost: the cost just before being able to return the first row. Some operations have a very small startup cost; others have blocking operations that must finish before sending their first result rows. Here, as we have no sort operation, the first row retrieved from the index can be returned immediately and the startup cost is small: 0.29.
In Oracle you can see the initial cost by optimizing the plan to retrieve the first row, with the FIRST_ROWS() hint:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0fjk9vv4g1q1w, child number 0
-------------------------------------
select /*+ first_rows(1) */ n from demo1 where n is not null order by
n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 2 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

The actual number of blocks read (48) is the same as before because I finally fetched all rows, but the cost is small because it was estimated for two rows only. Of course, we can also tell Postgres or Oracle that we want only the first rows. This is for the next post.

Character strings

The previous example is an easy one because the column N is a number, and both Oracle and Postgres store numbers in a binary format that follows the same order as the numbers. But that's different with character strings. If you are not in America, there is very little chance that the order you want to see follows the ASCII order. Here I've run a similar query but using the column X instead of N, which is a text (VARCHAR2 in Oracle):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fsqk4fg1t47v5, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x
--------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2493 (100)| 10000 |00:00:00.27 | 1644 | 18 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 2493 (1)| 10000 |00:00:00.27 | 1644 | 18 | 32M| 2058K| 29M (0)|
|* 2 | INDEX FAST FULL SCAN| DEMO1_X | 1 | 10000 | 389 (0)| 10000 |00:00:00.01 | 1644 | 18 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) NLSSORT("X",'nls_sort=''FRENCH''')[2000], "X"[VARCHAR2,1000]
2 - "X"[VARCHAR2,1000]

I have created an index on X, and as you can see it can be used to get all the X values, but with an Index Fast Full Scan: the multiblock index-only access, which is fast but does not return rows in the order of the index. A sort operation is then applied. I can force an Index Full Scan with the INDEX() hint, but the sort will still have to be done.

The reason can be seen in the Column Projection Information note. My Oracle client application is running on a laptop where the OS is in French, and Oracle returns the setting according to what the end user expects. This is National Language Support: an Oracle database can be accessed by users all around the world, and they will see ordered lists, date formats, decimal separators,… according to their country and language.
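
As a small illustration of how linguistic order differs from byte order, here is a hedged Postgres example (it assumes the fr_FR locale is available on the system): in French, accented characters sort next to their base letter rather than after 'z', where their UTF-8 byte values would place them.

select x from (values ('cote'),('côte'),('coté'),('côté')) v(x)
order by x collate "fr_FR";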

ORDER BY … COLLATE …

My databases have been created on a system which is in English. In Postgres we can get results sorted in French with the COLLATE option of ORDER BY:


explain (analyze,verbose,costs,buffers) select x from demo1 where x is not null order by x collate "fr_FR" ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=5594.17..5619.17 rows=10000 width=1036) (actual time=36.163..37.254 rows=10000 loops=1)
Output: x, ((x)::text)
Sort Key: demo1.x COLLATE "fr_FR"
Sort Method: quicksort Memory: 1166kB
Buffers: shared hit=59
-> Index Only Scan using demo1_x on public.demo1 (cost=0.29..383.29 rows=10000 width=1036) (actual time=0.156..1.559 rows=10000 loops=1)
Output: x, x
Index Cond: (demo1.x IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=52
Planning time: 0.792 ms
Execution time: 38.264 ms

Same idea here as in Oracle: there is an additional sort operation, a blocking operation that must complete before the first row can be returned.

The detail of the cost is the following (summed up after the list):

  • The index on the column X has 52 blocks, which is estimated at cost=208 (random_page_cost=4)
  • We have 10000 index entries to process, estimated at cost=50 (cpu_index_tuple_cost=0.005)
  • We have 10000 result rows to process, estimated at cost=100 (cpu_tuple_cost=0.01)
  • We have evaluated 10000 ‘is not null’ conditions, estimated at cost=25 (cpu_operator_cost=0.0025)
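
Adding these up: 208 + 50 + 100 + 25 = 383 which, together with the 0.29 startup cost, matches the total cost=383.29 of the Index Only Scan in the plan above.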

In Oracle we can use the same COLLATE syntax, but the name of the language is different: it is consistent across platforms rather than using the OS one:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 82az4syppyndf, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x collate "French"
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2493 (100)| 10000 |00:00:00.28 | 1644 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 2493 (1)| 10000 |00:00:00.28 | 1644 | 32M| 2058K| 29M (0)|
|* 2 | INDEX FAST FULL SCAN| DEMO1_X | 1 | 10000 | 389 (0)| 10000 |00:00:00.01 | 1644 | | | |
-----------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) NLSSORT("X" COLLATE "French",'nls_sort=''FRENCH''')[2000], "X"[VARCHAR2,1000]
2 - "X"[VARCHAR2,1000]

In Oracle, we do not need to use the COLLATE option. The language can be set for the session (NLS_LANGUAGE='French') or from the environment (NLS_LANG='=French_.'). Oracle can share cursors across sessions (to avoid wasting resources compiling and optimizing the same statements used by different sessions) but will not share execution plans among different NLS environments because, as we have seen, the plan can be different. Postgres does not have to manage that because each PREPARE statement does a full compilation and optimization. There is no cursor sharing in Postgres.

Indexing for different languages

We have seen in the Oracle execution plan's Column Projection Information that an NLSSORT operation is applied to the column to get a value that follows the collation order of the language. We have also seen in the previous post that we can index a function of a column. This gives us the possibility to create an index for different languages. The following index will be used to avoid the sort for French users:

create index demo1_x_fr on demo1(nlssort(x,'NLS_SORT=French'));

Since 12cR2 we can create the same with the COLLATE syntax:

create index demo1_x_fr on demo1(x collate "French");

Both syntaxes create the same index, which can be used by queries with ORDER BY … COLLATE or by sessions that set NLS_LANGUAGE:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 82az4syppyndf, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x collate "French"
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 4770 (100)| 10000 |00:00:00.02 | 4772 |
|* 1 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 1 | 10000 | 4770 (1)| 10000 |00:00:00.02 | 4772 |
| 2 | INDEX FULL SCAN | DEMO1_X_FR | 1 | 10000 | 3341 (1)| 10000 |00:00:00.01 | 3341 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "X"[VARCHAR2,1000]
2 - "DEMO1".ROWID[ROWID,10], "DEMO1"."SYS_NC00004$"[RAW,2000]

There’s no sort operation here as the INDEX FULL SCAN returns the rows in order.

PostgreSQL has the same syntax:

create index demo1_x_fr on demo1(x collate "fr_FR");

and then the query can use this index and bypass the sort operation:

explain (analyze,verbose,costs,buffers) select x from demo1 where x is not null order by x collate "fr_FR" ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_x_fr on public.demo1 (cost=0.29..383.29 rows=10000 width=1036) (actual time=0.190..1.654 rows=10000 loops=1)
Output: x, x
Index Cond: (demo1.x IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=32 read=20
Planning time: 1.049 ms
Execution time: 2.304 ms

Avoiding a sort operation can really improve the performance of queries in two ways: it saves the resources required by the sort (which will have to spill to disk when the workarea does not fit in memory), and it avoids a blocking operation, making it possible to return the first rows quickly.

We have seen how indexes can be used to access a subset of columns from a smaller structure, and how they can be used to access a sorted version of the rows. Future posts will show how index access is used to quickly filter a subset of rows. But for the moment I'll continue on this blocking operation. We have seen a lot of Postgres costs, and they have two values (startup cost and total cost). More on the startup cost in the next post.


The article Postgres vs. Oracle access paths IV – Order By and Index appeared first on Blog dbi services.

From idea to app or how I do an Oracle APEX project anno 2017

Dimitri Gielis - Sat, 2017-08-05 11:30
For a long time I had in mind to write in great detail how I do an Oracle APEX project from A to Z. But so far I never took the time to actually do it, until today :)

So here's the idea: I love building projects that help people and I love sharing what I know, so I will combine both. I will write down exactly my thoughts and the things I do as I move along with this project, so you have full insight into what's happening behind the scenes.
Background

Way back, in the year 1999, I built an application in Visual Basic to help children study the multiplication tables. My father was a math teacher and taught people who wanted to become primary school teachers. While visiting primary schools, he saw that children had difficulties automating the multiplications from 1 to 10, so together we thought about how we could help them. That is how the Visual Basic application was born. I don't have a working example of the program anymore, but I found some paper prints from that time, which you see here:



We are now almost 20 years later, and last year my son had difficulties memorizing the multiplication tables too. I tried sitting next to him to help him out, but when things don't go as smoothly as you hope... you have to stay calm and supportive, and I found that hard, especially when there are two other children crying for attention too, or you've had a rough day yourself... In a way I felt frustrated because I didn't know how to help further in the time I had. At some point I thought about the program I wrote back then and decided to quickly build a web app that would allow him to train himself. And to make it more fun for him, I told him I would practice too, so he saw it was doable :)

At KScope16 I showed this web app during Open Mic Night; it was far from fancy, but it did the job.
Here's a quick demo:



Some people recognized my story and asked if I could put the app online. I just built the app quickly for my son, so it needs some more work to make it accessible for others.
During my holidays I decided I should really treat this project as a real one, otherwise it would never happen. So here we are: that is what I'm going to do, and I'll write about it in detail :)
Idea - our requirement

The application helps children (typically between 7 and 11 years old) automate multiplications between 1 and 10. It also helps their parents get insight into the timings and mistakes of their children's multiplications.
Timeline

No project without a deadline, so I've set my go-production date to August 20th, 2017. That gives me about 2 weeks, typically one sprint in our projects.
Following along and feedback

I will tweet, blog and create some videos to show my progress. You can follow along and reach me on any of those channels. If you have any questions, tips or remarks during the development, don't hesitate to add a comment. I always welcome new ideas or insights and am happy to go into more detail if something is not clear.
High-level break-down of the plan for the following days
  • Create user stories and supporting ERD
  • List of the tools I use and why I use them
  • Set up the development environment
  • Create the Oracle database objects
  • Set up a domain name
  • Set up reverse proxy and https
  • Create a landing page and communicate
  • Build the Oracle APEX application: the framework
  • Refine the APEX app: create custom authentication
  • Refine the APEX app: adding the game
  • Refine the APEX app: improve the flow and navigation
  • Refine the APEX app: add ability to print results to PDF
  • Set up build process
  • Check security
  • Communicate first version of the app to registered people
  • Check performance
  • Refine the APEX app: add more reports and statistics
  • Check and reply to feedback
  • Set up automated testing
  • A word on debugging
  • Refine the APEX app: making final changes
  • Set up backups
  • Verify documentation and lessons learned
  • Close the loop and Celebrate :)
So now, let's get started ...
Categories: Development
