Feed aggregator

Using FBA with Materialized Views

Tom Kyte - Mon, 2018-01-15 05:26
Please refer to the LiveSQL link. NB some of the statements do not work because the user has insufficient privileges to create and manage Flashback areas in the LiveSQL environment. The code creates a table, inserts data, creates a flashback archive t...
Categories: DBA Blogs

Moving tables ONLINE on filegroup with constraints and LOB data

Yann Neuhaus - Mon, 2018-01-15 00:20

Let’s start this new week by going back to a discussion with one of my customers a couple of days ago about moving several tables into different filegroups. Some of the tables contained LOB data. Let’s add another customer requirement to the game: moving all of them ONLINE to avoid impacting data availability during the migration process. The tables concerned had primary key and foreign key constraints as well as non-clustered indexes: a pretty common schema we may deal with daily at customer shops.

Firstly, let’s say that the first topic of the discussion didn’t focus on moving non-clustered indexes to a different filegroup (pretty well known to my customer) but on how to move constraints online without integrity issues. The main reason was that different pointers my customer had found on the internet suggested first dropping such constraints (using the MOVE TO clause) and then recreating them, and that’s why he was not very confident about moving such constraints without introducing integrity issues.

Let’s illustrate this scenario with the following demonstration. I will use a dbo.bigTransactionHistory2 table that I want to move ONLINE from the PRIMARY to the FG1 filegroup. There is a primary key constraint on the TransactionID column as well as a foreign key on the ProductID column that refers to the ProductID column of the dbo.bigProduct table.

EXEC sp_helpconstraint 'dbo.bigTransactionHistory2';

[Figure: dbo.bigTransactionHistory2 primary key and foreign key constraints]

Here is a picture of the indexes existing on the dbo.bigTransactionHistory2 table:

EXEC sp_helpindex 'dbo.bigTransactionHistory2';

[Figure: dbo.bigTransactionHistory2 indexes]

Let’s say that the pk_bigTransactionHistory_TransactionID unique clustered index is tied to the primary key constraint.

Let’s start with the first approach, based on the MOVE TO clause.

ALTER TABLE dbo.bigTransactionHistory2 DROP CONSTRAINT pk_bigTransactionHistory_TransactionID WITH (MOVE TO FG1, ONLINE = ON);

--> At this point there is no constraint left to prevent duplicates

ALTER TABLE dbo.bigTransactionHistory2 ADD CONSTRAINT pk_bigTransactionHistory_TransactionID PRIMARY KEY(TransactionDate, TransactionID)
WITH (ONLINE = ON);

Looking further at the script performed, we may quickly figure out that this approach can let duplicate entries slip in between the drop-constraint step (which also moves the table to the FG1 filegroup) and the create-constraint step.

We might address this issue by encapsulating the above commands within a transaction, as in the sketch below. But obviously this method has a cost: we have a good chance of creating a long blocking scenario, depending on the amount of data, leading temporarily to data unavailability. The second drawback concerns performance. Indeed, we first drop the primary key constraint, meaning we drop the underlying clustered index structure in the background. Going this way also implies rebuilding the related non-clustered indexes to update their leaf level with row IDs, and rebuilding them again when re-adding the primary key constraint in the second step.
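For illustration, here is a minimal sketch of that transactional variant, reusing the table and filegroup names from above. It keeps concurrent duplicates out at the price of holding locks for the whole move:

BEGIN TRANSACTION;

-- Drop the PK and move the table to FG1 in one operation.
-- The schema modification lock taken here blocks concurrent writers
-- until the transaction commits.
ALTER TABLE dbo.bigTransactionHistory2
DROP CONSTRAINT pk_bigTransactionHistory_TransactionID
WITH (MOVE TO FG1, ONLINE = ON);

-- Recreate the PK; no duplicates can have slipped in meanwhile.
ALTER TABLE dbo.bigTransactionHistory2
ADD CONSTRAINT pk_bigTransactionHistory_TransactionID
PRIMARY KEY (TransactionDate, TransactionID)
WITH (ONLINE = ON);

COMMIT TRANSACTION;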

From my point of view there is a better way to proceed if we want all the steps to be performed efficiently and ONLINE, with the guarantee that the constraints continue to be enforced during the whole moving process.

Firstly, let’s move the primary key by using a one-step command. The same applies to UNIQUE constraints. In fact, moving such a constraint only requires rebuilding the corresponding index with the DROP_EXISTING and ONLINE options, which preserves the constraint functionality. In this case, my non-clustered indexes are not touched by the operation because we don’t have to update their leaf level as with the previous method.

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionDate] ASC, [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

In addition, the good news is that if we try to introduce a duplicate key while the index is rebuilding on the FG1 filegroup, we face the following error as expected:

Msg 2627, Level 14, State 1, Line 3
Violation of PRIMARY KEY constraint 'pk_bigTransactionHistory_TransactionID'.
Cannot insert duplicate key in object 'dbo.bigTransactionHistory2'. The duplicate key value is (Jan 1 2005 12:00AM, 1).

So now we may safely move the additional structures, i.e. the non-clustered indexes. We just have to execute the following command to move the corresponding physical structure ONLINE:

CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID ) 
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG1]

 

Let’s continue with the second scenario, which consists of moving a table ONLINE to a different filegroup with LOB data. Moving such data may be more complex than we might expect. The good news is that SQL Server 2012 introduced ONLINE operation capabilities for this, and my customer runs SQL Server 2014.

For the demonstration, let’s go back to the previous demo and introduce a new [other infos] column with VARCHAR(MAX) data. Here is the new definition of the dbo.bigTransactionHistory2 table:

CREATE TABLE [dbo].[bigTransactionHistory2](
	[TransactionID] [bigint] NOT NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NOT NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[other infos] [varchar](max) NULL,
 CONSTRAINT [pk_bigTransactionHistory_TransactionID] PRIMARY KEY CLUSTERED 
(
	[TransactionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Let’s take a look at the table’s underlying structure:

SELECT 
	OBJECT_NAME(p.object_id) AS table_name,
	p.index_id,
	p.rows,
	au.type_desc AS alloc_unit_type,
	au.used_pages,
	fg.name AS fg_name
FROM 
	sys.partitions as p
JOIN 
	sys.allocation_units AS au on p.hobt_id = au.container_id
JOIN	
	sys.filegroups AS fg on fg.data_space_id = au.data_space_id
WHERE
	p.object_id = OBJECT_ID('bigTransactionHistory2')
ORDER BY
	table_name, index_id, alloc_unit_type;

[Figure: dbo.bigTransactionHistory2 allocation units, including LOB_DATA]

A new LOB_DATA allocation unit type is there and indicates the table contains LOB data for all the index structures. At this stage, we might think that the previous way of moving the unique clustered index online is sufficient, but it is not, according to the output below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

[Figure: dbo.bigTransactionHistory2 allocation units after the move: LOB data left behind]

In fact, only data in the IN_ROW_DATA allocation units moved from the PRIMARY to the FG1 filegroup. In this context, moving LOB data is a non-trivial operation and I had to use a solution based on one proposed by Kimberly L. Tripp from SQLskills (definitely one of my favorite sources for tricky scenarios). So partitioning is the way to go. Following the solution from SQLskills, I created a temporary partition function and scheme as shown below:

SELECT MAX([TransactionID])
FROM dbo.bigTransactionHistory2
-- 6910883
GO


CREATE PARTITION FUNCTION pf_bigTransaction_history2_temp (BIGINT)
AS RANGE RIGHT FOR VALUES (6920000)
GO

CREATE PARTITION SCHEME ps_bigTransaction_history2_temp
AS PARTITION pf_bigTransaction_history2_temp
TO ( [FG1], [PRIMARY] )
GO

Applying the scheme to the dbo.bigTransactionHistory2 table allows us to move all data (IN_ROW_DATA and LOB_DATA) from the PRIMARY to the FG1 filegroup, as shown below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON ps_bigTransaction_history2_temp ([TransactionID])

A quick look at the storage configuration confirms that this time all data moved to FG1.

[Figure: dbo.bigTransactionHistory2 allocation units after partitioning: all data on FG1]

Let’s finally remove the temporary partitioning configuration from the table (remember that all operations are performed ONLINE):

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1]

-- Remove underlying partition configuration
DROP PARTITION SCHEME ps_bigTransaction_history2_temp;
DROP PARTITION FUNCTION pf_bigTransaction_history2_temp;
GO

[Figure: dbo.bigTransactionHistory2 final storage configuration]

Finally, you can apply the same method to all non-clustered indexes that contain LOB data, as sketched below.
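As a hedged illustration, assuming the [other infos] LOB column is carried in the index INCLUDE list (an assumption for this sketch), such a rebuild could look like this, performed before dropping the temporary partition scheme:

-- Hypothetical non-clustered index carrying LOB data via INCLUDE
CREATE NONCLUSTERED INDEX idx_bigTransactionHistory2_ProductID
ON dbo.bigTransactionHistory2 ( ProductID )
INCLUDE ( [other infos] )
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON ps_bigTransaction_history2_temp ([TransactionID]);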

Cheers
Cet article Moving tables ONLINE on filegroup with constraints and LOB data est apparu en premier sur Blog dbi services.

Spectre and Meltdown on Oracle Public Cloud UEK

Yann Neuhaus - Sun, 2018-01-14 14:12

In the last post I published the strange results I got when testing physical I/O with the latest Spectre and Meltdown patches. Here is the logical I/O, with SLOB cached reads.

Logical reads

I’ve run some SLOB cache reads with the latest patches, as well as with only KPTI disabled, and with KPTI, IBRS and IBPB disabled.
I am on the Oracle Public Cloud DBaaS with 4 OCPU

DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 670,001.2
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 671,145.4
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 672,464.0
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 685,706.7 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,291.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,386.4 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 699,301.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,773.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,908.2 nopti noibrs noibpb

This is what I expected: when disabling the mitigation for Meltdown (PTI) and some of the Spectre mitigations (IBRS and IBPB), I get slightly better performance, about 5%. This is with only one SLOB session.
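For reference, here is a hedged sketch of how these mitigations can be toggled; the exact interface depends on the kernel build, so treat the paths below as assumptions rather than a documented procedure:

# At boot: disable the mitigations on the kernel command line
#   nopti noibrs noibpb

# At runtime, on kernels exposing them through debugfs (assumed interface):
echo 0 > /sys/kernel/debug/x86/pti_enabled    # KPTI (Meltdown)
echo 0 > /sys/kernel/debug/x86/ibrs_enabled   # IBRS (Spectre v2)
echo 0 > /sys/kernel/debug/x86/ibpb_enabled   # IBPB (Spectre v2)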

However, with 2 sessions I have something completely different:

DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,235,637.8 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,237,689.6 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,243,464.3 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,251,485.1
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,253,477.0
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,271,986.7

This is not a saturation situation here. My VM shape is 4 OCPUs, which is supposed to be the equivalent of 4 hyperthreaded cores.

And this figure is even worse with 4 sessions (all cores used) and more:

DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks) : 2,268,272.3 nopti noibrs noibpb
DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks) : 2,415,044.8


DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks) : 3,353,985.7 nopti noibrs noibpb
DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks) : 3,540,736.5


DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks) : 4,365,752.3 nopti noibrs noibpb
DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks) : 4,519,340.7

The graph from those numbers:

[Graph: SLOB logical reads per second by number of sessions, with and without the mitigations]

If I compare with the Oracle PaaS I tested last year (https://blog.dbi-services.com/oracle-public-cloud-liops-with-4-ocpu-in-paas/), which was on Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, you can also see a nice improvement here on Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz.

This test was on 4.1.12-112.14.10.el7uek.x86_64, and Oracle Linux has now released a new update: 4.1.12-112.14.11.el7uek.

 

Cet article Spectre and Meltdown on Oracle Public Cloud UEK est apparu en premier sur Blog dbi services.

Docker-CE: How to modify containers with overlays / How to add directories to a standard docker image

Dietrich Schroff - Sun, 2018-01-14 13:01
After some experiments with Docker I wanted to run a Tomcat with my own configuration (e.g. memory settings, ports, ...).


My first idea was: download Tomcat, configure everything and then build an image.
BUT: after I learned how to use the -v (--volume) flag for adding files to a container via the docker command, I wondered whether I could create a new image with only the additional files on top of the standard Tomcat Docker image.

So the first step is to take a look at all local images:
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
friendlyhello       latest              976ee2bb47bf        3 days ago          148MB
tomcat              latest              11df4b40749f        8 days ago          558MB
I can use tomcat:latest (if it is not there, just pull it: docker pull tomcat).
The next step is to create a directory and add all the directories which you want to override.
For my example:
mkdir conftomcat
cd conftomcat
mkdir bin
Into the bin directory I put all the files from the standard Tomcat container:
# ls bin
bootstrap.jar  catalina-tasks.xml  commons-daemon-native.tar.gz  daemon.sh  setclasspath.sh  startup.sh       tool-wrapper.sh
catalina.sh    commons-daemon.jar  configtest.sh                 digest.sh  shutdown.sh      tomcat-juli.jar  version.sh
Inside the catalina.sh I added -Xmx384M.
In conftomcat I created the following Dockerfile:
FROM tomcat:latest
WORKDIR /usr/local/tomcat/bin
ADD status /usr/local/tomcat/webapps/mystatus
ADD bin /usr/local/tomcat/bin
ENTRYPOINT [ "/usr/local/tomcat/bin/catalina.sh" ]
CMD [ "run"]And as you can see i added my index.jsp which is inside status (s. this posting).
Ok. Let's see if my plan works:
# docker build -t mytomcat .
Sending build context to Docker daemon  375.8kB
Step 1/6 : FROM tomcat:latest
 ---> 11df4b40749f
Step 2/6 : WORKDIR /usr/local/tomcat/bin
 ---> Using cache
 ---> 5696a9ab99cb
Step 3/6 : ADD status /usr/local/tomcat/webapps/mystatus
 ---> 1bceea5af515
Step 4/6 : ADD bin /usr/local/tomcat/bin
 ---> e8d3a386a7f0
Step 5/6 : ENTRYPOINT [ "/usr/local/tomcat/bin/catalina.sh" ]
 ---> Running in a04038032bb7
Removing intermediate container a04038032bb7
 ---> 4c8fda05df18
Step 6/6 : CMD [ "run"]
 ---> Running in cce378648e7a
Removing intermediate container cce378648e7a
 ---> 72ecfe2aa4a7
Successfully built 72ecfe2aa4a7
Successfully tagged mytomcat:latest
and then start it:
docker run -p 4001:8080 mytomcat
Let's check the memory settings:
$ ps aux|grep java
root      2313 20.7  8.0 2418472 81236 ?       Ssl  19:51   0:02 /docker-java-home/jre/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xmx394M -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start
Yes - changed to 384M.
And check the JSP:

[Screenshot: the mystatus JSP page served by the container]

Yippie!
As you can see, I have the standard Tomcat running with an override of the memory configuration to 384M. So it should be easy to add certificates, WARs, ... to such a standard container; see the sketch below.
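A hedged example of dropping a WAR into the same kind of overlay image (mywebapp.war is a hypothetical artifact placed in the build context):

FROM tomcat:latest
# Tomcat auto-deploys WARs placed in webapps/ at startup
ADD mywebapp.war /usr/local/tomcat/webapps/mywebapp.war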

Will you help me build the zoo of programming languages?

Amis Blog - Sun, 2018-01-14 12:23

Have you ever come across the following challenge? You have to name something: your own project, your own product, your company, your boat or even your own child. Coming up with the right name is very important since this is something you have worked on for a long time. So the name has to reflect your inspiration and effort. You put your own blood, sweat and tears into creating this and spent many long lonely nights to finalize it (just forget the child metaphor here). And now you are ready to launch it. But wait... it has no name. The best way to name something is to find an example in nature, and animals are powerful and good inspirations for names. Here are 14 programming languages, software products, and tools that are named after an animal, grouped together in my zoo of programming languages. And there are probably many more. Feel free to help me and add yours as comments on this article.

Impala

Impala is one of the major tools for querying a big data database. Impala is a query engine that runs on Hadoop. It offers scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation. Impala is integrated with Hadoop to use the same file and data formats, metadata, security and resource management frameworks used by MapReduce, Apache Hive, Apache Pig and other Hadoop software.
Impala is promoted for analysts and data scientists to perform analytics on data stored in Hadoop via SQL or business intelligence tools. The result is that large-scale data processing (via MapReduce) and interactive queries can be done on the same system using the same data and metadata, removing the need to migrate data sets into specialized systems and/or proprietary formats simply to perform the analysis. https://impala.apache.org/

The other Impala is a medium-sized antelope found in eastern and southern Africa, the sole member of the genus Aepyceros.

Toad

Toad is a database management toolset from Quest Software that database developers, database administrators, and data analysts use to manage both relational and non-relational databases using SQL. There are Toad products for developers and DBAs, which run on Oracle, SQL Server, IBM DB2 (LUW & z/OS), SAP and MySQL, as well as a Toad product for data preparation, which supports most data platforms. Toad solutions enable data professionals to automate processes, minimize risks and cut project delivery timelines. https://www.quest.com/toad/

The other toad is a common name for certain frogs, especially of the family Bufonidae, that are characterized by dry, leathery skin, short legs, and large bumps covering the parotoid glands. Wikipedia

Elk

The ELK stack (now called Elastic Stack) consists of Elasticsearch, Logstash, and Kibana. Although they’ve all been built to work exceptionally well together, each one is a separate project driven by the open-source vendor Elastic, which itself began as an enterprise search platform vendor. It has now become a full-service analytics software company, mainly because of the success of the ELK stack. Wide adoption of Elasticsearch for analytics has been the main driver of its popularity.
Elasticsearch is a juggernaut solution for your data extraction problems. A single developer can use it to find the high-value needles underneath all of your data haystacks, so you can put your team of data scientists to work on another project. https://en.wikipedia.org/wiki/Elasticsearch

The other elk, or wapiti (Cervus canadensis), is one of the largest species within the deer family, Cervidae, and one of the largest land mammals in North America and Eastern Asia. This animal should not be confused with the still larger moose (Alces alces), to which the name “elk” applies in British English and in reference to populations in Eurasia.

Ant

Apache Ant is a software tool for automating software build processes, which originated from the Apache Tomcat project in early 2000. It was a replacement for the Make build tool of Unix and was created due to a number of problems with Unix’s make. It is similar to Make but is implemented in the Java language, requires the Java platform, and is best suited to building Java projects. The most immediately noticeable difference between Ant and Make is that Ant uses XML to describe the build process and its dependencies, whereas Make uses the Makefile format. By default, the XML file is named build.xml. Ant is an open-source project, released under the Apache License by the Apache Software Foundation. https://ant.apache.org/index.html

The other Ant is a eusocial insect of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants evolved from wasp-like ancestors in the Cretaceous period, about 99 million years ago, and diversified after the rise of flowering plants. More than 12,500 of an estimated total of 22,000 species have been classified. They are easily identified by their elbowed antennae and the distinctive node-like structure that forms their slender waists.

Rhino

The Rhino project was started at Netscape in 1997. At the time, Netscape was planning to produce a version of Netscape Navigator written fully in Java and so it needed an implementation of JavaScript written in Java. When Netscape stopped work on Javagator, as it was called, the Rhino project was finished as a JavaScript engine. Since then, a couple of major companies (including Sun Microsystems) have licensed Rhino for use in their products and paid Netscape to do so, allowing work to continue on it. https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino

The other Rhino (rhinoceros, from Greek rhinokeros, meaning ‘nose-horned’, from rhinos, meaning ‘nose’, and keratos, meaning ‘horn’), commonly abbreviated to rhino, is any one of five extant species of odd-toed ungulates in the family Rhinocerotidae, as well as any of the numerous extinct species. Two of the extant species are native to Africa and three to Southern Asia.

Python

Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, and a syntax that allows programmers to express concepts in fewer lines of code, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library. https://en.wikipedia.org/wiki/Python_(programming_language)

The other Python is a genus of nonvenomous snakes in the family Pythonidae found in Africa and Asia. Until recently, seven extant species were recognised; however, three subspecies have been promoted and a new species recognized. A member of this genus, Python reticulatus, is among the longest snake species and extant reptiles in the world.

Goat

WebGoat (or GOAT) is a deliberately insecure web application maintained by OWASP, designed to teach web application security lessons. This program is a demonstration of common server-side application flaws. The exercises are intended to be used by people to learn about application security and penetration testing techniques. https://www.owasp.org/index.php/Category:OWASP_WebGoat_Project

The other goat is a member of the family Bovidae and is closely related to the sheep as both are in the goat-antelope subfamily Caprinae. There are over 300 distinct breeds of goat. Goats are one of the oldest domesticated species and have been used for their milk, meat, hair, and skins over much of the world.

Lama

LAMA is a framework for developing hardware-independent, high-performance code for heterogeneous computing systems. It facilitates the development of fast and scalable software that can be deployed on nearly every type of system with a single code base. The framework supports multiple target platforms within a distributed heterogeneous environment. It offers optimized device code on the backend side, and high scalability through latency hiding and asynchronous execution across multiple nodes. https://www.libama.org/

The other Lama, the llama (Lama glama), is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the Pre-Columbian era.
They are very social animals and live with other llamas as a herd. The wool produced by a llama is very soft and lanolin-free. Llamas are intelligent and can learn simple tasks after a few repetitions. When carrying a pack, they can carry about 25 to 30% of their body weight for 8 to 13 km (5–8 miles).

Serpent

Serpent is one of the high-level programming languages used to write Ethereum contracts. The language, as suggested by its name, is designed to be very similar to Python; it is intended to be maximally clean and simple, combining many of the efficiency benefits of a low-level language with ease of use in programming style, and at the same time adding special domain-specific features for contract programming. The latest version of the Serpent compiler, available on GitHub, is written in C++, allowing it to be easily included in any client.

The serpent, or snake, is one of the oldest and most widespread mythological symbols. The word is derived from Latin serpens, a crawling animal or snake. Snakes have been associated with some of the oldest rituals known to humankind and represent the dual expression of good and evil.

Penguin

PENGUIN is a grammar-based language for programming graphical user interfaces. Code for each thread of control in a multi-threaded application is confined to its own module, promoting modularity and reuse of code. Networks of PENGUIN components (each composed of an arbitrary number of modules) can be used to construct large reactive systems with parallel execution, internal protection boundaries, and plug-compatible communication interfaces. We argue that the PENGUIN building-block approach constitutes a more appropriate framework for user interface programming than the traditional Seeheim Model. We discuss the design of PENGUIN and relate our experiences with applications. https://en.wikipedia.org/wiki/Penguin_Software

The other penguins (order Sphenisciformes, family Spheniscidae) are a group of aquatic, flightless birds. They live almost exclusively in the Southern Hemisphere, with only one species, the Galapagos penguin, found north of the equator. Highly adapted for life in the water, penguins have countershaded dark and white plumage, and their wings have evolved into flippers. Most penguins feed on krill, fish, squid and other forms of sea life caught while swimming underwater. They spend about half of their lives on land and half in the oceans. Although almost all penguin species are native to the Southern Hemisphere, they are not found only in cold climates such as Antarctica. In fact, only a few species of penguin live so far south. Several species are found in the temperate zone, and one species, the Galápagos penguin, lives near the equator.

Cheetah

Cheetah is a Python-powered template engine and code generator. It can be used standalone or combined with other tools and frameworks. Web development is its principal use, but Cheetah is very flexible and is also being used to generate C++ game code, Java, SQL, form emails and even Python code. Cheetah is an open source template engine and code-generation tool written in Python; it can be used on its own or incorporated with other technologies and stacks regardless of whether they’re written in Python or not. https://pythonhosted.org/Cheetah/

At its core, Cheetah is a domain-specific language for markup generation and templating which allows for full integration with existing Python code but also offers extensions to traditional Python syntax to allow for easier text-generation.

Porcupine

Porcupine is an open-source Python-based Web application server that provides front-end and back-end revolutionary technologies for building modern data-centric Web 2.0 applications. Many of the tasks required for building web applications as you know them are either eliminated or simplified. For instance, when developing a Porcupine application you don’t have to design a relational database. You only have to design and implement your business objects as Python classes, using the building blocks provided by the framework (data types). Porcupine integrates a native object key/value database, therefore the overheads required by an object-relational mapping technique when retrieving or updating a single object are removed. http://www.innoscript.org/

The other Porcupines are rodentian mammals with a coat of sharp spines, or quills, that protect against predators. The term covers two families of animals, the Old World porcupines of family Hystricidae, and the New World porcupines of family Erethizontidae. Both families belong to the infraorder Hystricognathi within the profoundly diverse order Rodentia and display superficially similar coats of quills: despite this, the two groups are distinct from each other and are not closely related to each other within the Hystricognathi.

Orca

Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism.
A programming language for distributed systems: http://courses.cs.vt.edu/~cs5314/Lang-Paper-Presentation/Papers/HoldPapers/ORCA.pdf

The other orca (Orcinus orca) is a toothed whale belonging to the oceanic dolphin family, of which it is the largest member. Killer whales have a diverse diet, although individual populations often specialize in particular types of prey. Some feed exclusively on fish, while others hunt marine mammals such as seals and dolphins. They have been known to attack baleen whale calves, and even adult whales. Killer whales are apex predators, as there is no animal that preys on them. Killer whales are considered a cosmopolitan species, and can be found in each of the world’s oceans in a variety of marine environments, from Arctic and Antarctic regions to tropical seas – Killer whales are only absent from the Baltic and Black seas, and some areas of the Arctic ocean.

Seagull

Seagull is an Open Source (GPL) multi-protocol traffic generator test tool. Primarily aimed at IMS (3GPP, TISPAN, CableLabs) protocols (and thus being the perfect complement to SIPp for IMS testing), Seagull is a powerful traffic generator for functional, load, endurance, stress and performance/benchmark tests for almost any kind of protocol. Seagull is a traffic generator for load testing, created by HP and released in 2006. http://gull.sourceforge.net/

The other Seagull is a seabird of the family Laridae in the suborder Lari. They are most closely related to the terns (family Sternidae) and only distantly related to auks, skimmers, and more distantly to the waders. Until the 21st century, most gulls were placed in the genus Larus, but this arrangement is now known to be polyphyletic, leading to the resurrection of several genera.

Sloth

Sloth is the world’s slowest computer language. It was proudly announced by Larry Page at the 2014 Google WWDC as a reaction to Microsoft’s C-flat-minor. Both languages are still competing in the race for the slowest computer language. Sloth, which stands for Seriously Low Optimization ThresHolds, has been under development for a really, really long time. I mean, like, forever, man. https://www.eetimes.com/author.asp?doc_id=1322644

[Image: Larry Page at the recent WWDC introducing SLOTH.]

The other Sloths are arboreal mammals noted for the slowness of movement and for spending most of their lives hanging upside down in the trees of the tropical rainforests of South America and Central America. The six species are in two families: two-toed sloths and three-toed sloths. In spite of this traditional naming, all sloths actually have three toes. The two-toed sloths have two digits, or fingers, on each forelimb. The sloth is so named because of its very low metabolism and deliberate movements, sloth being related to the word slow.


Add your languages

Hope you enjoyed this small tour. There are probably many more languages named after animals. Please add them as comments and I will update the article. Hopefully, we can cover the entire animal kingdom. Thank you in advance for your submissions.

Sources from Wikipedia.

 

The post Will you help me build the zoo of programming languages? appeared first on AMIS Oracle and Java Blog.

DBMS_AQ.LISTEN to listen to a Single/Multi-Consumer Queue

Tom Kyte - Sun, 2018-01-14 11:06
Dear Experts, Need your guidance/suggestions to resolve this issue: As part of our Oracle Advanced Queuing implementation, we have to dequeue the message as soon as it has been enqueued into the queue. This should happen immediately without any manual inter...
Categories: DBA Blogs

Doing DB upgrade RAC, via DBUA, from 11gR2 to 12cR2. Using TDE (tablespace level) on source database

Tom Kyte - Sun, 2018-01-14 11:06
I am running a DB 11.2.0.4 (RAC db) that has TDE implemented - Tablespace level. Source db (11.2.0.4) has TDE implemented. sqlnet.ora file on each node has the entry ENCRYPTION_WALLET_LOCATION. Also each node has the wallet and auto login file (t...
Categories: DBA Blogs

Audit Trail : Disable my bash script audit

Tom Kyte - Sun, 2018-01-14 11:06
Hello Tom. I set audit trail to "XML,EXTENDED", because my AUD$ table was growing too much. I have a lot of 4kb files generated. I have several scripts in my crontab, and that is what is being audited. The content of the files are like this:...
Categories: DBA Blogs

YTD logic using analytic functions

Tom Kyte - Sun, 2018-01-14 11:06
Hi Tom, I am trying to get YTD (year-to-date) in a view. I have the view below: create or replace view billsummary as select szRegionCode, szState, szPartitionCode, szProduct, TO_CHAR(dtSnapshot,'YYYY.MM') szMonthYear, szJioCenter, ...
Categories: DBA Blogs

Configuring a SQL Loader control File to exclude the second row

Tom Kyte - Sun, 2018-01-14 11:06
Hi, I am trying to configure a control file that excludes the second line of data from the load. The system is automated and I have been tasked to see if there is a solution to this. I am very new at this. I have been told about a discard file of ...
Categories: DBA Blogs

Dynamic query to print out any table

Tom Kyte - Sun, 2018-01-14 11:06
Hi Tom, how can I write a procedure that takes any query as a parameter and prints the data as comma-separated values? Please help...
Categories: DBA Blogs

Truncate statement in data dictionary,

Tom Kyte - Sun, 2018-01-14 11:06
Hello, I have observed that the truncate statement (command_type = 85) doesn't appear in V$SQL. However, it does appear in V$SQLTEXT and V$SQLTEXT_WITH_NEWLINES. My intention is to extract the time of the truncate statement. How can I achieve this task witho...
Categories: DBA Blogs

Tracking User logins between 7:00 pm and 7:00 am

Tom Kyte - Sun, 2018-01-14 11:06
Hello Sir, I have a requirement to track and generate a report of the users logging into the database after office hours, i.e., between 7:00 pm and 7:00 am on a daily basis. We have audit_trail set to 'DB'. I'd appreciate it if you can help me in ...
Categories: DBA Blogs

Loading CLOB Columns from File

Tom Kyte - Sun, 2018-01-14 11:06
Loading table data from an external source into Oracle for data mining and analysis. One particular table has seven 8000-byte character fields which must be loaded into CLOB columns. Most of these are empty. The data is provided as a tab delimited text file w...
Categories: DBA Blogs

Not getting connection with database in cmd window

Tom Kyte - Sun, 2018-01-14 11:06
Hi, I have just installed Oracle Database 11g Express Edition from www.oracle.com. When I was connecting to the database in a command prompt window I got an error while entering the password for user name "system". I am typing below the exact error w...
Categories: DBA Blogs

Dockerizing Your Development and Test Oracle databases

Debu Panda - Sat, 2018-01-13 19:07
You are probably reading this blog because your application depends on Oracle database. Most enterprises in the world depend on Oracle database to run their business. If you are a developer using Oracle database and getting started with Docker, you must be wondering how you can use a containerized Oracle database.

In my last blog, I outlined how you can use a Dockerized Tomcat/TomEE with an Oracle database. In this blog, I will describe how you can use a Dockerized Oracle database for your development or test activities.

If you want to get started with Docker, review their getting started guide.

Getting Docker Images
Oracle provides Docker images for Oracle Database, so you don't have to build one using a Dockerfile. You can get Oracle Database Docker images either from the Docker Store or the Oracle Container Registry.

You have to register and accept the licenses in the Docker Store or the Oracle Container Registry.

In this blog, I will outline the steps required for the Docker image downloaded from Oracle Container Registry.

Login to Container-Registry 
You can log in to the Oracle Container Registry as follows:

docker login container-registry.oracle.com
Username :      
Password:

The Oracle Container Registry provides the option to download images for Oracle Database Standard or Enterprise Edition (12.2.0.1).

Note that the download may take several minutes or up to a couple of hours depending on your internet bandwidth.
Download Oracle Database EE
If you want to download the docker image for Oracle Database Enterprise Edition, you can use the Docker pull command as follows:

docker pull container-registry.oracle.com/database/enterprise:12.2.0.1


You will get output as below if your command is successful:

12.2.0.1: Pulling from database/enterprise
cbb9821ba51c: Downloading [>                                                  ]  1.599MB/81.5MB
9bd4d110366e: Downloading [>                                                  ]  1.067MB/143MB
af8b29651e27: Download complete 
4c242ab1add4: Download complete 
7bda1e55bd08: Downloading [>                                                  ]  1.599MB/2.737GB

Download Oracle Database SE
In my example, I am going to use Oracle Database Standard Edition. You can download the image for Oracle DB SE as below:

docker pull container-registry.oracle.com/database/standard

You will see output as below:

Using default tag: latest
latest: Pulling from database/standard
Digest: sha256:fad41f7b4b885f13943872218a73c7f051e2caed0b5d5620d8f6f1287cf44918
Status: Image is up to date for container-registry.oracle.com/database/standard:latest

Download Issues
You will get authentication errors if you have not logged in to the Docker registry.

Ensure that you accepted the Oracle license agreement in the Oracle Container Registry, otherwise you will get an error message as below:

Error response from daemon: pull access denied for database/standard, repository does not exist or may require 'docker login'

Checking Docker Images
You can check Docker images available by using the following command:

docker images | grep oracle


container-registry.oracle.com/java/serverjre           8                   daea2cf635d1        5 weeks ago         280MB
container-registry.oracle.com/database/instantclient   latest              fda46de41de3        4 months ago        407MB
container-registry.oracle.com/database/standard        latest              faa877d7fbdd        7 months ago        5.16GB

DB Config file
The Oracle DB container requires a configuration file where you can specify a few parameters such as the database SID, password, etc.

Here is the db.properties file that I used. As you can see, I changed the default password and the domain for my database.


DB_SID=ORCL

## db passwd
## default : Oracle

DB_PASSWD=welcome1

## db domain
## default : localdomain

DB_DOMAIN=us.oracle.com

## db bundle
## default : basic
## valid : basic / high / extreme
## (high and extreme are only available for enterprise edition)

DB_BUNDLE=basic


Starting Database Container
You can start the database container by using the command shown below. If you have not downloaded the database image, it will be pulled automatically from the container registry.
            

docker run -d --env-file db.properties -p 1521:1521 -p 5500:5500 --name orcldb --net appnet  --shm-size="4g" -v /Users/dpanda/orderapp2/orcl:/u04/app container-registry.oracle.com/database/standard

The container will start and the database will be ready to use within a few minutes.

Reviewing Key Parameters 

Let’s review some of the key parameters I specified.

·       The --shm-size="4g" parameter sets the size of shared memory i.e. /dev/shm for the container to 4GB. 

·       The --name orcldb parameter sets the name of the container to orcldb. You can log in to the container with that name, and other containers can communicate with this container by that name when using SQL*Net or JDBC. You can also use this name to stop or remove the container.

·       The --net appnet parameter connects the container to the bridge network named appnet.

·       The -v /Users/dpanda/orderapp2/orcl:/u04/app option maps the /u04/app directory in the container to a local volume (/Users/dpanda/orderapp2/orcl) on my Mac. This mapping allows the database to create the redo logs on my local drive. It also lets me run SQL scripts from my local drive inside the container, as sketched below.
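For instance, a hedged sketch of both access paths (myscript.sql is a hypothetical script placed in the mapped local directory, and the JDBC URL assumes the DB_SID value from db.properties above):

# Run a local SQL script inside the container through the mapped volume
docker exec -it orcldb su - oracle -c "sqlplus system/welcome1 @/u04/app/myscript.sql"

# From another container on the appnet network, a JDBC URL could look like:
#   jdbc:oracle:thin:@orcldb:1521:ORCL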

You can check the status of the running containers by using the docker ps command as below:


CONTAINER ID        IMAGE                                             COMMAND                  CREATED             STATUS              PORTS                                            NAMES
9101006044e9        container-registry.oracle.com/database/standard   "/bin/sh -c '/bin/..."   2 minutes ago       Up 2 minutes        0.0.0.0:1521->1521/tcp, 0.0.0.0:5500->5500/tcp   orcldb
fccce8035b91        orderapp                                          "catalina.sh run"        46 hours ago        Up 46 hours         0.0.0.0:8080->8080/tcp

As you can see, the orcldb container running my Oracle database started up 2 minutes ago.                           

Executing Commands in the Container
Now that the container is running, you can run commands inside it by executing the docker exec command.

You can login to the container as below and check whether things are set properly.

Note that these steps are purely optional.

1. Login to the Container

docker exec -it orcldb /bin/bash
[root@9101006044e9 /]# 

2. Switch Linux user to oracle user from root

su - oracle

Last login: Sat Jan 13 05:50:33 UTC 2018 on pts/0

3. You can check a few things, such as the Oracle environment variables:

 echo $ORACLE_SID
ORCL

 echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/dbhome_1

4. Connect with SQL*Plus

sqlplus sys/welcome1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 13 07:03:14 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> 
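From here, a quick sanity check could be a simple query against the instance; this is an assumed example rather than one from the original session:

SQL> SELECT name, open_mode FROM v$database;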


Accessing the Oracle Enterprise Manager Database Console

You can access the EM console at https://localhost:5500/em.

You can log in with the user ID sys or system and the password you specified in the db.properties file when starting the container.

[Screenshot: EM Express login page]




After you log in you will see the Database Home page as below. You can see the name of your container as your database host.

[Screenshot: EM Express Database Home page]
Your Dockerized Oracle Database is now ready for use! 


Keep exploring until next time, when we will see how you can use SQL*Plus with the Dockerized database.

Spectre/Meltdown on Oracle Public Cloud UEK – PIO

Yann Neuhaus - Sat, 2018-01-13 10:24

The Spectre and Meltdown mitigations are now in the latest Oracle UEK kernel, after updating it with 'yum update':

[opc@PTI ~]$ rpm -q --changelog kernel-uek
| awk '/CVE-2017-5715|CVE-2017-5753|CVE-2017-5754/{print $NF}' | sort | uniq -c
43 {CVE-2017-5715}
16 {CVE-2017-5753}
71 {CVE-2017-5754}

As I did in the previous post on AWS, I've run quick tests on the Oracle Public Cloud.

Physical reads

I’ve run some SLOB I/O reads with the patches, as well as with KPTI disabled, and with KPTI, IBRS and IBPB all disabled. The state of the mitigations can be verified as sketched below.
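Here is a hedged way to check what is currently active; the debugfs paths are an assumption for this kernel family and may differ between builds:

# 1 = mitigation enabled, 0 = disabled
grep . /sys/kernel/debug/x86/pti_enabled /sys/kernel/debug/x86/ibrs_enabled /sys/kernel/debug/x86/ibpb_enabled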

And I was quite surprised by the result:


DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 23,335.6 nopti
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 23,420.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 24,857.6
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 25,332.1


DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 39,857.7 nopti
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,088.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,627.0
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,707.5


DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 47,491.4 nopti
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 47,491.4 nopti
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 49,438.2
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 49,764.5


DB Time(s) : 8.0 DB CPU(s) : 1.2 Read IO requests : 54,227.9 nopti
DB Time(s) : 8.0 DB CPU(s) : 1.2 Read IO requests : 54,582.9 nopti
DB Time(s) : 8.0 DB CPU(s) : 1.3 Read IO requests : 57,288.6
DB Time(s) : 8.0 DB CPU(s) : 1.4 Read IO requests : 57,057.2

Yes. In all the tests that I've done, the IOPS is higher with KPTI enabled than when booting the kernel with the nopti option. Here is a graph with those numbers:

[Graph: read IOPS by number of sessions, with and without PTI]

I did those tests on the Oracle Cloud because I know that we have very fast I/O here, in hundreds of microseconds, probably all cached in the storage:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Event                            Waits   Total Wait Time (sec)   Avg Wait   % DB time   Wait Class
------------------------------ -------- ----------------------- ---------- ----------- ----------
db file parallel read            196,921                   288.8     1.47ms        48.0   User I/O
db file sequential read          581,073                   216.3   372.31us        36.0   User I/O
DB CPU                                                     210.5                   35.0
 
Wait Event Histogram (% of total waits)
Event                       Waits  <8us <16us <32us <64us <128u <256u >=512
------------------------- ------- ----- ----- ----- ----- ----- ----- -----
db file parallel read      196.9K                            .0   1.0  99.0
db file sequential read    581.2K                    17.3  69.5  13.3
 
Wait Event Histogram Detail
Event                     Waits to 32m  <512  <1ms  <2ms  <4ms  <8ms <16ms <32ms >=32m
------------------------- ------------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read           194.9K   1.0  15.4  74.7   8.5    .3    .1    .0    .0
db file sequential read          77.2K  86.7  10.7   2.3    .2    .1    .0    .0    .0
 

So what?

I expected to have higher IOPS when disabling the page table isolation, because of the overhead of context switches, but it is the opposite here. Maybe this is because I have a very small SGA (my goal being to have only physical reads). Note also that, as far as I know, only my guest OS has been patched for Meltdown and Spectre. We will see if the numbers are different after the next Oracle Cloud maintenance.

 

Cet article Spectre/Meltdown on Oracle Public Cloud UEK – PIO est apparu en premier sur Blog dbi services.

Ubuntu Intel Spectre/Meltdown update

Dietrich Schroff - Sat, 2018-01-13 08:48
One week after the rumors about Spectre and Meltdown (see the Project Zero blog), my Ubuntu 17.10 got the Intel microcode patch:


root@zerberus:~# apt-get upgrade
Paketlisten werden gelesen... Fertig
Abhängigkeitsbaum wird aufgebaut.      
Statusinformationen werden eingelesen.... Fertig
Paketaktualisierung (Upgrade) wird berechnet... Fertig
Die folgenden Pakete wurden automatisch installiert und werden nicht mehr benötigt:
  linux-headers-4.13.0-17 linux-headers-4.13.0-17-generic
  linux-image-4.13.0-17-generic linux-image-extra-4.13.0-17-generic
Verwenden Sie »apt autoremove«, um sie zu entfernen.
Die folgenden Pakete sind zurückgehalten worden:
  linux-generic linux-headers-generic linux-image-generic
Die folgenden Pakete werden aktualisiert (Upgrade):
  gir1.2-javascriptcoregtk-4.0 gir1.2-poppler-0.18 gir1.2-webkit2-4.0
  intel-microcode libjavascriptcoregtk-4.0-18 libpoppler-glib8 libpoppler68
  libruby2.3 libwebkit2gtk-4.0-37 libwebkit2gtk-4.0-37-gtk2 linux-libc-dev
  poppler-utils ruby2.3
13 aktualisiert, 0 neu installiert, 0 zu entfernen und 3 nicht aktualisiert.
Es müssen 30,5 MB an Archiven heruntergeladen werden.
Nach dieser Operation werden 321 kB Plattenplatz zusätzlich benutzt.
Möchten Sie fortfahren? [J/n]

Holen:1 http://de.archive.ubuntu.com/ubuntu artful-updates/universe amd64 libwebkit2gtk-4.0-37-gtk2 amd64 2.18.5-0ubuntu0.17.10.1 [9.026 kB]
Holen:2 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 libwebkit2gtk-4.0-37 amd64 2.18.5-0ubuntu0.17.10.1 [11,2 MB]                                                      
Holen:3 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 libjavascriptcoregtk-4.0-18 amd64 2.18.5-0ubuntu0.17.10.1 [4.052 kB]                                              
Holen:4 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 gir1.2-webkit2-4.0 amd64 2.18.5-0ubuntu0.17.10.1 [67,6 kB]                                                        
Holen:5 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 gir1.2-javascriptcoregtk-4.0 amd64 2.18.5-0ubuntu0.17.10.1 [21,0 kB]                                              
Holen:6 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 poppler-utils amd64 0.57.0-2ubuntu4.2 [141 kB]                                                                    
Holen:7 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 libpoppler-glib8 amd64 0.57.0-2ubuntu4.2 [108 kB]                                                                 
Holen:8 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 libpoppler68 amd64 0.57.0-2ubuntu4.2 [787 kB]                                                                     
Holen:9 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 gir1.2-poppler-0.18 amd64 0.57.0-2ubuntu4.2 [18,4 kB]                                                             
Holen:10 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 linux-libc-dev amd64 4.13.0-25.29 [963 kB]                                                                       
Holen:11 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 intel-microcode amd64 3.20180108.0~ubuntu17.10.1 [1.090 kB]
Holen:12 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 libruby2.3 amd64 2.3.3-1ubuntu1.2 [2.972 kB]
Holen:13 http://de.archive.ubuntu.com/ubuntu artful-updates/main amd64 ruby2.3 amd64 2.3.3-1ubuntu1.2 [41,0 kB]                                                                         
Es wurden 30,5 MB in 25 s geholt (1.186 kB/s).                                                                                                                                          
(Lese Datenbank ... 391417 Dateien und Verzeichnisse sind derzeit installiert.)
Vorbereitung zum Entpacken von .../00-libwebkit2gtk-4.0-37-gtk2_2.18.5-0ubuntu0.17.10.1_amd64.deb ...
Entpacken von libwebkit2gtk-4.0-37-gtk2:amd64 (2.18.5-0ubuntu0.17.10.1) über (2.18.4-0ubuntu0.17.10.1) ...
Vorbereitung zum Entpacken von .../01-libwebkit2gtk-4.0-37_2.18.5-0ubuntu0.17.10.1_amd64.deb ...
Entpacken von libwebkit2gtk-4.0-37:amd64 (2.18.5-0ubuntu0.17.10.1) über (2.18.4-0ubuntu0.17.10.1) ...
Vorbereitung zum Entpacken von .../02-libjavascriptcoregtk-4.0-18_2.18.5-0ubuntu0.17.10.1_amd64.deb ...
Entpacken von libjavascriptcoregtk-4.0-18:amd64 (2.18.5-0ubuntu0.17.10.1) über (2.18.4-0ubuntu0.17.10.1) ...
Vorbereitung zum Entpacken von .../03-gir1.2-webkit2-4.0_2.18.5-0ubuntu0.17.10.1_amd64.deb ...
Entpacken von gir1.2-webkit2-4.0:amd64 (2.18.5-0ubuntu0.17.10.1) über (2.18.4-0ubuntu0.17.10.1) ...
Vorbereitung zum Entpacken von .../04-gir1.2-javascriptcoregtk-4.0_2.18.5-0ubuntu0.17.10.1_amd64.deb ...
Entpacken von gir1.2-javascriptcoregtk-4.0:amd64 (2.18.5-0ubuntu0.17.10.1) über (2.18.4-0ubuntu0.17.10.1) ...
Vorbereitung zum Entpacken von .../05-poppler-utils_0.57.0-2ubuntu4.2_amd64.deb ...
Entpacken von poppler-utils (0.57.0-2ubuntu4.2) über (0.57.0-2ubuntu4.1) ...
Vorbereitung zum Entpacken von .../06-libpoppler-glib8_0.57.0-2ubuntu4.2_amd64.deb ...
Entpacken von libpoppler-glib8:amd64 (0.57.0-2ubuntu4.2) über (0.57.0-2ubuntu4.1) ...
Vorbereitung zum Entpacken von .../07-libpoppler68_0.57.0-2ubuntu4.2_amd64.deb ...
Entpacken von libpoppler68:amd64 (0.57.0-2ubuntu4.2) über (0.57.0-2ubuntu4.1) ...
Vorbereitung zum Entpacken von .../08-gir1.2-poppler-0.18_0.57.0-2ubuntu4.2_amd64.deb ...
Entpacken von gir1.2-poppler-0.18:amd64 (0.57.0-2ubuntu4.2) über (0.57.0-2ubuntu4.1) ...
Vorbereitung zum Entpacken von .../09-linux-libc-dev_4.13.0-25.29_amd64.deb ...
Entpacken von linux-libc-dev:amd64 (4.13.0-25.29) über (4.13.0-21.24) ...
Vorbereitung zum Entpacken von .../10-intel-microcode_3.20180108.0~ubuntu17.10.1_amd64.deb ...
Entpacken von intel-microcode (3.20180108.0~ubuntu17.10.1) über (3.20170707.1) ...
Vorbereitung zum Entpacken von .../11-libruby2.3_2.3.3-1ubuntu1.2_amd64.deb ...
Entpacken von libruby2.3:amd64 (2.3.3-1ubuntu1.2) über (2.3.3-1ubuntu1.1) ...
Vorbereitung zum Entpacken von .../12-ruby2.3_2.3.3-1ubuntu1.2_amd64.deb ...
Entpacken von ruby2.3 (2.3.3-1ubuntu1.2) über (2.3.3-1ubuntu1.1) ...
intel-microcode (3.20180108.0~ubuntu17.10.1) wird eingerichtet ...
update-initramfs: deferring update (trigger activated)
intel-microcode: microcode will be updated at next boot
linux-libc-dev:amd64 (4.13.0-25.29) wird eingerichtet ...
gir1.2-javascriptcoregtk-4.0:amd64 (2.18.5-0ubuntu0.17.10.1) wird eingerichtet ...
Trigger für libc-bin (2.26-0ubuntu2) werden verarbeitet ...
Trigger für man-db (2.7.6.1-2) werden verarbeitet ...
libjavascriptcoregtk-4.0-18:amd64 (2.18.5-0ubuntu0.17.10.1) wird eingerichtet ...
libruby2.3:amd64 (2.3.3-1ubuntu1.2) wird eingerichtet ...
libpoppler68:amd64 (0.57.0-2ubuntu4.2) wird eingerichtet ...
libpoppler-glib8:amd64 (0.57.0-2ubuntu4.2) wird eingerichtet ...
poppler-utils (0.57.0-2ubuntu4.2) wird eingerichtet ...
libwebkit2gtk-4.0-37:amd64 (2.18.5-0ubuntu0.17.10.1) wird eingerichtet ...
libwebkit2gtk-4.0-37-gtk2:amd64 (2.18.5-0ubuntu0.17.10.1) wird eingerichtet ...
gir1.2-poppler-0.18:amd64 (0.57.0-2ubuntu4.2) wird eingerichtet ...
ruby2.3 (2.3.3-1ubuntu1.2) wird eingerichtet ...
gir1.2-webkit2-4.0:amd64 (2.18.5-0ubuntu0.17.10.1) wird eingerichtet ...
Trigger für initramfs-tools (0.125ubuntu12) werden verarbeitet ...
update-initramfs: Generating /boot/initrd.img-4.13.0-21-generic
Trigger für libc-bin (2.26-0ubuntu2) werden verarbeitet ...


So note the "intel-microcode" package, which states:
intel-microcode: microcode will be updated at next boot

And after the reboot:
schroff@zerberus:~$ dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0xc2, date = 2017-11-16
[    1.400728] microcode: sig=0x406e3, pf=0x40, revision=0xc2
[    1.401060] microcode: Microcode Update Driver: v2.2.
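To double-check the applied revision per CPU after such an update, the microcode field in /proc/cpuinfo can be inspected; a generic check, not from the original post, which given the dmesg output above should report 0xc2:

schroff@zerberus:~$ grep microcode /proc/cpuinfo | sort -u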

Global temporary table clears on commit

Tom Kyte - Fri, 2018-01-12 22:26
Hi, Please, help me to understand what's happening: I've a GTT with ON COMMIT PRESERVE ROWS and after inserting values with a procedure: " insert into gtt ... select ...; commit; " the next sql " select * from gtt " returns nothing!!! but,...
Categories: DBA Blogs

Fast wild-card searching

Tom Kyte - Fri, 2018-01-12 22:26
What is the best way to implement a solution to the following problem so that wild-card searching is very fast? PROBLEM: Two column tabular data with ~100 million rows of the form given below. Searching is on the first column. The number of sear...
Categories: DBA Blogs
