Feed aggregator

Virtualization on Windows 10 with Virtual Box, Hyper-V and Docker Containers

Amis Blog - Mon, 2017-07-17 16:17

Recently I started working on a brand new HP ZBook 15-G3 with Windows 10 Pro. And I immediately tried to return to the state I had my previous Windows 7 laptop in: Oracle Virtual Box for running most software in virtual machines, using Docker Machine (and Kubernetes) for running some things in Docker Containers and using Vagrant to spin up some of these containers and VMs.

I quickly ran into some issues that made me reconsider – and realize that some things are different on Windows 10. This article gives a brief summary of my explorations and findings.

  • Docker for Windows provides near native support for running Docker Containers; the fact that under the covers there is still a Linux VM running is almost hidden, and from the command line (PowerShell) and a GUI I have easy access to the containers. I do not believe, though, that I can run containers that expose a GUI – except through a VNC client
  • Docker for Windows leverages Hyper-V. Hyper-V lets you run an operating system or computer system as a virtual machine on Windows (it is built into Windows as an optional feature and needs to be explicitly enabled). Hyper-V on Windows is very similar to VirtualBox
  • In order to use Hyper-V or Virtual Box, hardware virtualization must be enabled in the system’s BIOS
  • And the one finding that took longest to realize: Virtual Box will not work if Hyper-V is enabled. So the system at any one time can only run Virtual Box or Hyper-V (and Docker for Windows), not both. Switching Hyper-V support on and off is fairly easy (see the sketch below), but it does require a reboot
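For reference, this is roughly how Hyper-V can be enabled and temporarily switched off again when VirtualBox needs VT-x – a minimal sketch from an elevated PowerShell prompt; a reboot is required after each change (the article in the Resources section below describes a more permanent approach with two boot entries):

# enable the Hyper-V feature (run as Administrator)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# temporarily disable the hypervisor so VirtualBox can use VT-x again
bcdedit /set hypervisorlaunchtype off

# re-enable it when Hyper-V / Docker for Windows is needed
bcdedit /set hypervisorlaunchtype auto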

Quick tour of Windows Hyper-V

Creating a virtual machine is very easy. A good example is provided in this article: https://blog.couchbase.com/hyper-v-run-ubuntu-linux-windows/ that describes how a Hyper-V virtual machine is created with Ubuntu Linux.

I went through the following steps to create a Hyper-V VM running Fedora 26 (a rough PowerShell equivalent is sketched after the list). It was easy enough. However, the result is not as good in terms of the GUI experience as I had hoped it would be. Some of my issues: low resolution, only a 4:3 aspect ratio, and I cannot get out of full screen mode (that requires CTRL-ALT-BREAK and my keyboard does not have a Break key; all alternatives I have found do not work for me).

    • Download ISO image for Fedora 26 (Fedora-Workstation-Live-x86_64-26-1.5.iso using Fedora Media Writer or from https://fedora.mirror.wearetriple.com/linux/releases/26/Workstation/x86_64/iso/)
    • Enable Virtualization in BIOS
    • Enable Hyper-V (First, open Control Panel. Next, go to Programs. Then, click “Turn Windows features on or off”. Finally, locate Hyper-V and click the checkbox (if it isn’t already checked))
    • Run Hyper-V Manager – click on search, type Hype… and click on Hyper-V Manager
      image
    • Create Virtual Switch – a Network Adapter that will allow the Virtual Machine to communicate with the outside world
      image
    • Create Virtual Machine – specify name, size and location of the virtual hard disk (well, real enough inside the VM, virtual on your host), size of memory, select the network switch (created in the previous step), and specify the operating system and the ISO file from which it will be installed
      image
    • Start the virtual machine and connect to it. It will boot and allow you to run through the installation procedure
    • Potentially change the screen resolution used in the VM. That is not so simple: see this article for instructions: https://www.netometer.com/blog/?p=1663 Note: this is one of the reasons why I am not yet a fan of Hyper-V
    • Restart the VM and connect to it (note: you may have to eject the ISO file from the virtual DVD player, as otherwise the machine could boot again from the ISO image instead of the now properly installed (virtual) hard disk)
      image
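The same steps can be scripted with the Hyper-V PowerShell module – a minimal sketch, assuming a physical network adapter named "Ethernet" and example paths for the VHDX file and the Fedora ISO:

# create an external virtual switch bound to the physical network adapter
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet"

# create the VM with a new 40 GB virtual hard disk and 4 GB of memory
New-VM -Name "Fedora26" -Generation 1 -MemoryStartupBytes 4GB `
       -NewVHDPath "C:\VMs\Fedora26.vhdx" -NewVHDSizeBytes 40GB -SwitchName "ExternalSwitch"

# attach the Fedora ISO to the virtual DVD drive and start the VM
Set-VMDvdDrive -VMName "Fedora26" -Path "C:\ISO\Fedora-Workstation-Live-x86_64-26-1.5.iso"
Start-VM -Name "Fedora26"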

References

Article that explains how to create a Hyper-V virtual machine that runs Ubuntu (including desktop): https://blog.couchbase.com/hyper-v-run-ubuntu-linux-windows/ 

Microsoft article on how to use local resources (USB, Printer) inside Hyper-V virtual machine: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/learn-more/Use-local-resources-on-Hyper-V-virtual-machine-with-VMConnect 

Microsoft documentation: introduction to the Hyper-V hypervisor on Windows 10: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/

Two articles on converting Virtual Box VM images to Hyper-V: https://cloudbase.it/convert-virtualbox-to-hyper-v/ and (better) https://www.groovypost.com/howto/migrate-virtual-box-vms-windows-10-hyper-v/

And: how to turn one’s own PC into a Hyper-V VM: http://www.online-tech-tips.com/free-software-downloads/convert-pc-into-virtual-machine/

Rapid intro to Docker on Windows

Getting going with Docker on Windows is surprisingly simple and pleasant. Just install Docker for Windows (see for example this article for instructions: https://www.htpcbeginner.com/install-docker-on-windows-10/ ). Make sure that Hyper-V is enabled – because Docker for Windows leverages Hyper-V to run a Linux VM: the MobyLinuxVM that you see the details for in the next figure.

image

At this point you can interact with Docker from the PowerShell command line – simply type docker ps, docker run, docker build and other docker commands on your command line. To just run containers based on images – local or in public or private registries – you can use the Docker GUI Kitematic. It is a separate install action – largely automated, as is described here: https://www.htpcbeginner.com/install-kitematic-on-windows/ – to get Kitematic installed. That is well worth the extremely small trouble it is.
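For example, a quick PowerShell session to verify the installation might look like this (the image names are just illustrations):

# is the Docker engine up and which containers are running?
docker version
docker ps

# run a throw-away container to confirm everything works end to end
docker run --rm hello-world

# run an nginx web server in the background, exposed on port 8080 of the host
docker run -d -p 8080:80 --name web nginx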

image

From Kitematic, you have a graphical overview of your containers as well as an interactive UI for starting containers, configuring them, inspecting them and interacting with them. All things you can do from the command line – but so much simpler.

image

In this example, I have started a container based on the consol/ubuntu-xfce-vnc image (see https://hub.docker.com/r/consol/ubuntu-xfce-vnc/) which runs the Ubuntu Linux distribution with a “headless” VNC session, an Xfce4 UI and preinstalled Firefox and Chrome browsers.

image

The Kitematic IP & Ports tab specifies that port 5901 – the VNC port – is mapped to port 32769 on the host (my Windows 10 laptop). I can run the MobaXterm tool and open a VNC session with it, for 127.0.0.1 at port 32769. This allows me to remotely (or at least outside of the container) see the GUI for the Ubuntu desktop:

image
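For reference, starting the same container and port mapping without Kitematic would look roughly like this from PowerShell (Kitematic picked host port 32769 automatically; here it is fixed explicitly):

# run the headless VNC desktop image and map the VNC port 5901 to a fixed host port
docker run -d -p 32769:5901 --name ubuntu-desktop consol/ubuntu-xfce-vnc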

Even though it looks okay and it is pretty cool that I can graphically interact with the container, it is not a very good visual experience – especially when things start to move around. Docker for Windows is really best for headless programs that run in the background.

For quickly trying out Docker images and for running containers in the background – for example with a MongoDB database, an Elastic Search Index and a Node.JS or nginx web server – this seems to be a very usable way of working.

References

Introducing Docker for Windows – documentation: https://docs.docker.com/docker-for-windows/

Download Docker for Windows Community Edition: https://www.docker.com/community-edition#/download

Article on installation for Kitematic – the GUI for Docker for Windows: https://www.htpcbeginner.com/install-kitematic-on-windows/ 

Download MobaXterm: http://mobaxterm.mobatek.net/ 

Virtual Box on Windows 10

My first impression of Virtual Box compared to Hyper-V is that, for now at least, I far prefer Virtual Box (for running Linux VMs). The support for shared folders between host and guest, the high resolution GUI for the guest, and the fact that currently many prebuilt images are available for Virtual Box and not so many (or hardly any) for Hyper-V are for now points in favor of Virtual Box. I never run VMs with Windows as guest OS; if I did, I am sure that would impact my choice.

Note – once more – that for VirtualBox to run on Windows 10, you need to make sure that hardware virtualization is enabled in the BIOS and that Hyper-V is not enabled. Failing to take care of either of these two will return the same error: VT-x is not available (VERR_VMX_NO_VMX):

image

Here is a screenshot of a prebuilt VM image running on Virtual Box on Windows 10 – all out of the box.

image

No special set up required. It uses the full screen, it can interact with the host, is clipboard enabled, I can easily toggle between guest and host and it has good resolution and reasonable responsiveness:

image

Resources

Article describing setting up two boot profiles for Windows 10 – one for Hyper-V and one without it (for example run Virtual Box): https://marcofranssen.nl/switch-between-hyper-v-and-virtualbox-on-windows/

Article that explains how to create a Hyper-V virtual machine that runs Ubuntu (including desktop): https://blog.couchbase.com/hyper-v-run-ubuntu-linux-windows/ 

Microsoft article on how to use local resources (USB, Printer) inside Hyper-V virtual machine: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/learn-more/Use-local-resources-on-Hyper-V-virtual-machine-with-VMConnect 

Microsoft documentation: introduction of Hypervisor Hyper-v on Windows 10: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/

HP Forum entry on enabling Virtualization in the BIOS for ZBook G2: https://h30434.www3.hp.com/t5/Business-Notebooks/Enable-hardware-virtualization-on-HP-ZBOOK-15-G2/td-p/5513726 

Introducing Docker for Windows – documentation: https://docs.docker.com/docker-for-windows/

Download Docker for Windows Community Edition: https://www.docker.com/community-edition#/download

Article on installation for Kitematic – the GUI for Docker for Windows: https://www.htpcbeginner.com/install-kitematic-on-windows/ 

Two articles on converting Virtual Box VM images to Hyper-V: https://cloudbase.it/convert-virtualbox-to-hyper-v/ and (better) https://www.groovypost.com/howto/migrate-virtual-box-vms-windows-10-hyper-v/

And: how to turn one’s own PC into a Hyper-V VM: http://www.online-tech-tips.com/free-software-downloads/convert-pc-into-virtual-machine/

The post Virtualization on Windows 10 with Virtual Box, Hyper-V and Docker Containers appeared first on AMIS Oracle and Java Blog.

Check Your EBS Database Configuration with Database Parameter Settings Analyzer

Steven Chan - Mon, 2017-07-17 15:31

In addition to helping customers resolve issues via Service Requests, Oracle Support also builds over 60 free diagnostic tools for Oracle E-Business Suite 12.2, 12.0, 12.1, and 11i. These Support Analyzers are non-invasive scripts that run health-checks on your EBS environments. They look for common issues and generate standardized reports that provide solutions for known issues and recommendations on best practices.

Here's an index to these tools:

Spotlight on Database Parameter Settings Analyzer

We publish a definitive list of Oracle Database initialization parameter settings for the optimal performance of Oracle E-Business Suite 12.2, 12.1, and 12.0 in Note 396009.1:  

This document is updated regularly; for example, it was recently updated to account for changes introduced by the April 2017 updates to the AD and TXK utilities.

It can be challenging to keep up with those changes simply by scanning Note 396009.1 yourself.  You can automate this process by using the Database Parameter Settings Analyzer:

The Database Parameter Settings Analyzer compares your database's parameter settings to the latest recommendations in Note 396009.1.  It reports on any differences, and makes recommendations about your sga_target, shared_pool_size, shared_pool_reserved_size and processes parameters based upon the number of active users for your environment.  
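To get a feel for what the Analyzer compares, you can of course also look at a handful of those parameters yourself; a minimal sketch in plain SQL (not part of the Analyzer itself):

select name, value
  from v$parameter
 where name in ('sga_target',
                'shared_pool_size',
                'shared_pool_reserved_size',
                'processes')
 order by name;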

This tool can be run manually or configured to run as a concurrent request, so it can be scheduled to be run periodically and included in regular database maintenance cycles.

Can this script be run against Production?

Yes. There is no DML in the Analyzer Script, so it is safe to run against Production instances to get an analysis of the environment for a specific instance. As always it is recommended to test all suggestions against a TEST instance before applying to Production.

Related Articles

Categories: APPS Blogs

Video: Making RESTful Web Services the Easy Way with Node.js | Dan McGhan

OTN TechBlog - Mon, 2017-07-17 13:28

Drivers make it easy to connect to and run statements against a database. That means they're perfect for creating RESTful APIs, right? You'll want to add some pagination capabilities, maybe sorting controls, and perhaps some generic filtering options. You could do all that with the driver and some smart code, but is there an easier way?  In this video replay of Dan McGhan's session from the Full Stack Web track in the recent Oracle Code Online event, you'll learn about some of the challenges associated with manual API creation using drivers, and about several tools that offer similar functionality out of the box, including Loopback, Sails, and Oracle REST Data Services. Watch the video!

Related Resources

Video: Taming the Asynchronous Nature of Node.js

Node.js Community Space

Video: Implementing Node.js in the Enterprise

Mocha.js for Test Automation of Node.js REST API on Oracle Developer Cloud Service

Analyzing Wimbledon Twitter Feeds in Real Time with Kafka, Presto and Oracle DVD v3

Rittman Mead Consulting - Mon, 2017-07-17 09:09
image

Last week there was Wimbledon, if you are a fan of Federer, Nadal or Djokovic then it was one of the events not to be missed. I deliberately excluded Andy Murray from the list above since he kicked out my favourite player: Dustin Brown.

image

Two weeks ago I was at Kscope17 and one of the common themes, which reflected where the industry is going, was the usage of Kafka as the central hub for all data pipelines. I won't go into detail on the specific role of Kafka and how it accomplishes it; you can grab the idea from two slides taken from a recent presentation by Confluent.

image

One of the key points of all Kafka-related discussions at Kscope was that Kafka is widely used to take data from providers and push it to specific data-stores (like HDFS) that are then queried by analytical tools. However the "parking to data-store" step can sometimes be omitted, with analytical tools querying Kafka directly for real-time analytics.

image

We wrote a blog post at the beginning of the year about doing this with Spark Streaming and Python; however, that setup was more data-scientist oriented and didn't provide the simple ANSI SQL familiar to the beloved end-users.

As usual, Oracle announced a new release during Kscope. This year it was Oracle Data Visualization Desktop 12.2.3.0.0 with a bunch of new features covered in my previous blog post.
The enhancement, amongst others, that made my day was the support for JDBC and ODBC drivers. It opened a whole bundle of opportunities to query tools not officially supported by DVD but that expose those types of connectors.

One of the tools that fits in this category is Presto, a distributed query engine belonging to the same family as Impala and Drill, commonly referred to as SQL-on-Hadoop. A big plus of this tool, compared to the other two mentioned above, is that it natively queries Kafka via a dedicated connector.

I then found a way of fitting two of the main Kscope17 topics, a new SQL-on-Hadoop tool and one of my favourite sports (tennis), into the same blog post: analysing real-time Twitter feeds with Kafka, Presto and Oracle DVD v3. Not a bad idea.... let's check if it works...

Analysing Twitter Feeds

Let's start from the actual fun: analysing the tweets! We can navigate to the Oracle Analytics Store and download some interesting add-ins we'll use: the Auto Refresh plugin that enables the refresh of the DV project, the Heat Map and Circle Pack visualizations and the Term Frequency advanced analytics pack.

Importing the plugin and new visualizations can be done directly in the console as explained in my previous post. In order to be able to use the advanced analytics function we need to unzip the related file and move the .xml file it contains to %INSTALL_DIR%\OracleBI1\bifoundation\advanced_analytics\script_repository. In the Advanced Analytics zip file there is also a .dva project that we can import into DVD (password Admin123) which gives us a hint on how to use the function.

We can now build a DVD project about the Wimbledon gentlemen's singles final containing:

  • A table view showing the latest tweets
  • A horizontal bar chart showing the number of tweets containing mentions to Federer, Cilic or Both
  • A circle view showing the most tweeted terms
  • A heatmap showing tweet locations (only for tweets with geolocation enabled)
  • A line chart showing the number of tweets over time

The project is automatically refreshed using the auto-refresh plugin mentioned above. A quick view of the result is provided by the following image.

image

So far all good and simple! Now it's time to go back and check how the data is collected and queried. Let's start from Step #1: pushing Twitter data to Kafka!

Kafka

We covered Kafka installation and setup in a previous blog post, so I'll not repeat that part.
The only piece I want to mention, since it gave me trouble, is the advertised.host.name setting: it's a configuration line in /opt/kafka*/config/server.properties that tells Kafka which host it is listening on.

If you leave the default localhost and try to push content to a topic from an external machine it will not show up, so as pre-requisite change it to a hostname/IP that can be resolved externally.
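In server.properties that boils down to a single line – shown here with the example host name that appears later in this post (on newer Kafka versions advertised.listeners plays the same role):

advertised.host.name=linuxsrv.local.com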

The rest of the Kafka setup is the creation of a Twitter producer. I took this Java project as an example and changed it to use the latest Kafka release available in Maven. It allowed me to create a Kafka topic named rm.wimbledon storing tweets containing the word Wimbledon.

The same output could be achieved using Kafka Connect and its sink and source for Twitter. Kafka Connect also has the benefit of being able to transform the data before landing it in Kafka, making the data parsing easier and the storage faster to retrieve. I'll cover the usage of Kafka Connect in a future post; for more information about it, check this presentation from Robin Moffatt of Confluent.

One final note about Kafka: I ran a command to limit the retention to a few minutes

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic rm.wimbledon --config retention.ms=300000  

This limits the amount of data that is kept in Kafka, providing better performance at query time. This is not always possible in Kafka due to data collection needs, and there are other ways of optimizing the query if necessary.

At this point of our project we have a dataflow from Twitter to Kafka, but no known way of querying it with DVD. It's time to introduce the query engine: Presto!

Presto

Presto, developed at Facebook, is in the family of SQL-on-Hadoop tools. However, like Apache Drill, it could be called SQL-on-everything, since data doesn't need to reside on a Hadoop system. Presto can query local file systems, MongoDB, Hive, and a big variety of datasources.

Like the other SQL-on-Hadoop technologies it works with always-on daemons, which avoid the latency Hive incurs when starting a MapReduce job. Presto, differently from the others, divides the daemons into two types: the Coordinator and the Worker. A Coordinator is a node that receives the query from the clients; it analyses and plans the execution, which is then passed on to Workers to carry out.

In other tools like Impala and Drill every node can by default act as both worker and coordinator. The same can also happen in Presto but it is not the default, and the documentation suggests dedicating a single machine to only perform coordination tasks for best performance in large clusters (reference to the doc).

The following image, taken from Presto website, explains the flow in case of usage of the Hive metastore as datasource.

image

Installation

The default Presto installation procedure is pretty simple and can be found in the official documentation. We just need to download the presto-server-0.180.tar.gz tarball and unpack it.

tar -xvf presto-server-0.180.tar.gz  

This creates a folder named presto-server-0.180, which is the installation directory; the next step is to create a subfolder named etc which contains the configuration settings.

Then we need to create four configuration files and a folder within the etc folder:

  • node.properties: configuration specific to each node (environment name, node id, data directory), enables the configuration of a cluster
  • jvm.config: options for the Java Virtual Machine
  • config.properties: specific coordinator/worker settings
  • log.properties: specifies log levels
  • catalog: a folder that will contain the data source definition

For basic functionality we need the following configurations:

node.properties
node.environment=production  
node.id=ffffffff-ffff-ffff-ffff-ffffffffffff  
node.data-dir=/var/presto/data  

The environment parameter is shared across all the nodes in the cluster, the id is a unique identifier of the node, and data-dir is the location where Presto will store logs and data.

jvm.config
-server
-Xmx4G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError

I reduced the -Xmx parameter to 4GB as I'm running in a test VM. The parameters can of course be changed as needed.

config.properties

Since we want to keep it simple, we'll create a single node acting both as coordinator and as worker; the related config file is:

coordinator=true  
node-scheduler.include-coordinator=true  
http-server.http.port=8080  
query.max-memory=5GB  
query.max-memory-per-node=1GB  
discovery-server.enabled=true  
discovery.uri=http://linuxsrv.local.com:8080  

Here coordinator=true tells Presto to function as coordinator, http-server.http.port defines the port, and discovery.uri is the URI of the Discovery server (in this case the same process).

log.properties
com.facebook.presto=INFO  

We can keep the default INFO level; other levels are DEBUG, WARN and ERROR.

catalog

The last step in the configuration is the datasource setting: we need to create a folder named catalog within etc and create a file for each connection we intend to use.

For the purpose of this post we want to connect to the Kafka topic named rm.wimbledon. We need to create a file named kafka.properties within the catalog folder created above. The file contains the following lines

connector.name=kafka  
kafka.nodes=linuxsrv.local.com:9092  
kafka.table-names=rm.wimbledon  
kafka.hide-internal-columns=false  

where kafka.nodes points to the Kafka brokers and kafka.table-names defines the comma-delimited list of topics.
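For example, to expose a second, hypothetical topic rm.usopen through the same catalog, that property would become:

kafka.table-names=rm.wimbledon,rm.usopen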

The last bit needed is to start the Presto server by executing

bin/launcher start  

We can append the --verbose parameter to debug the installation with logs that can be found in the var/log folder.

Presto Command Line Client

In order to query Presto via the command line interface we just need to download the associated client (see official doc), which comes in the form of a presto-cli-0.180-executable.jar file. We can then rename the file to presto and make it executable.

mv presto-cli-0.180-executable.jar presto  
chmod +x presto  

Then we can start the client by executing

./presto --server linuxsrv.local.com:8080 --catalog kafka --schema rm

Remember that the client requires JDK 1.8; otherwise you will face an error. Once the client is successfully set up, we can start querying Kafka.

You may notice that the schema (rm) we're connecting to is just the prefix of the rm.wimbledon topic used in Kafka. In this way I could potentially store other topics using the same rm prefix and be able to query them all together.

We can check which schemas can be used in Kafka with

presto:rm> show schemas;  
       Schema       
--------------------
 information_schema 
 rm                 
(2 rows)

We can also check which topics are contained in rm schema by executing

presto:rm> show tables;  
   Table   
-----------
 wimbledon 
(1 row)

or change schema by executing

use information_schema;  

Going back to the Wimbledon example we can describe the content of the topic by executing

presto:rm> describe wimbledon;  
      Column       |  Type   | Extra |                   Comment                   
-------------------+---------+-------+---------------------------------------------
 _partition_id     | bigint  |       | Partition Id                                
 _partition_offset | bigint  |       | Offset for the message within the partition 
 _segment_start    | bigint  |       | Segment start offset                        
 _segment_end      | bigint  |       | Segment end offset                          
 _segment_count    | bigint  |       | Running message count per segment           
 _key              | varchar |       | Key text                                    
 _key_corrupt      | boolean |       | Key data is corrupt                         
 _key_length       | bigint  |       | Total number of key bytes                   
 _message          | varchar |       | Message text                                
 _message_corrupt  | boolean |       | Message data is corrupt                     
 _message_length   | bigint  |       | Total number of message bytes               
(11 rows)

We can immediately start querying it like

presto:rm> select count(*) from wimbledon;  
 _col0 
-------
 42295 
(1 row)

Query 20170713_102300_00023_5achx, FINISHED, 1 node  
Splits: 18 total, 18 done (100.00%)  
0:00 [27 rows, 195KB] [157 rows/s, 1.11MB/s]  

Remember all the queries are going against Kafka in real time, so the more messages we push, the more results we'll have available. Let's now check what the messages look like

presto:rm> SELECT _message FROM wimbledon LIMIT 5;

-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 {"created_at":"Thu Jul 13 10:22:46 +0000 2017","id":885444381767081984,"id_str":"885444381767081984","text":"RT @paganrunes: Ian McKellen e Maggie Smith a Wimbl

 {"created_at":"Thu Jul 13 10:22:46 +0000 2017","id":885444381913882626,"id_str":"885444381913882626","text":"@tomasberdych spricht vor dem @Wimbledon-Halbfinal 

 {"created_at":"Thu Jul 13 10:22:47 +0000 2017","id":885444388645740548,"id_str":"885444388645740548","text":"RT @_JamieMac_: Sir Andrew Murray is NOT amused wit

 {"created_at":"Thu Jul 13 10:22:49 +0000 2017","id":885444394404503553,"id_str":"885444394404503553","text":"RT @IBM_UK_news: What does it take to be a #Wimbled

 {"created_at":"Thu Jul 13 10:22:50 +0000 2017","id":885444398929989632,"id_str":"885444398929989632","text":"RT @PakkaTollywood: Roger Federer Into Semifinals \

(5 rows)

As expected, tweets are stored in JSON format. We can now use the Presto JSON functions to extract the relevant information from them. In the following we're extracting the user.name part of every tweet. Note the LIMIT 10 (common among all the SQL-on-Hadoop technologies) to limit the number of rows returned.

presto:rm> SELECT json_extract_scalar(_message, '$.user.name') FROM wimbledon LIMIT 10;  
        _col0        
---------------------
 pietre --           
 BLICK Sport         
 Neens               
 Hugh Leonard        
 ••••Teju KaLion•••• 
 Charlie Murray      
 Alex                
 The Daft Duck.      
 Hotstar             
 Raj Singh Chandel   
(10 rows)

We can also create summaries like the top 10 users by number of tweets.

presto:rm> SELECT json_extract_scalar(_message, '$.user.name') as screen_name, count(json_extract_scalar(_message, '$.id')) as nr FROM wimbledon GROUP BY json_extract_scalar(_message, '$.user.name') ORDER BY count(json_extract_scalar(_message, '$.id')) desc LIMIT 10;  
     screen_name     | nr  
---------------------+-----
 Evarie Balan        | 125 
 The Master Mind     | 104 
 Oracle Betting      |  98 
 Nichole             |  85 
 The K - Man         |  75 
 Kaciekulasekran     |  73 
 vientrainera        |  72 
 Deporte Esp         |  66 
 Lucas Mc Corquodale |  64 
 Amal                |  60 
(10 rows)

Adding a Description file

We saw above that it's possible to query with ANSI SQL statements using the Presto JSON functions. The next step will be to define a structure on top of the data stored in the Kafka topic to turn the raw data into a table format. We can achieve this by writing a topic description file. The file must be in JSON format and stored under the etc/kafka folder; it is recommended, but not necessary, that the name of the file matches the Kafka topic (in our case rm.wimbledon). The file in our case would be the following

{
    "tableName": "wimbledon",
    "schemaName": "rm",
    "topicName": "rm.wimbledon",
    "key": {
        "dataFormat": "raw",
        "fields": [
            {
                "name": "kafka_key",
                "dataFormat": "LONG",
                "type": "BIGINT",
                "hidden": "false"
            }
        ]
    },
    "message": {
        "dataFormat": "json",
        "fields": [
            {
                "name": "created_at",
                "mapping": "created_at",
                "type": "TIMESTAMP",
                "dataFormat": "rfc2822"
            },
            {
                "name": "tweet_id",
                "mapping": "id",
                "type": "BIGINT"
            },
            {
                "name": "tweet_text",
                "mapping": "text",
                "type": "VARCHAR"
            },
            {
                "name": "user_id",
                "mapping": "user/id",
                "type": "VARCHAR"
            },
            {
                "name": "user_name",
                "mapping": "user/name",
                "type": "VARCHAR"
            },
            [...]
        ]
    }
}

After restarting Presto, when we execute the DESCRIBE operation we can see all the fields available.

presto:rm> describe wimbledon;  
      Column       |   Type    | Extra |                   Comment                   
-------------------+-----------+-------+---------------------------------------------
 kafka_key         | bigint    |       |                                             
 created_at        | timestamp |       |                                             
 tweet_id          | bigint    |       |                                             
 tweet_text        | varchar   |       |                                             
 user_id           | varchar   |       |                                             
 user_name         | varchar   |       |                                             
 user_screenname   | varchar   |       |                                             
 user_location     | varchar   |       |                                             
 user_followers    | bigint    |       |                                             
 user_time_zone    | varchar   |       |                                             
 _partition_id     | bigint    |       | Partition Id                                
 _partition_offset | bigint    |       | Offset for the message within the partition 
 _segment_start    | bigint    |       | Segment start offset                        
 _segment_end      | bigint    |       | Segment end offset                          
 _segment_count    | bigint    |       | Running message count per segment           
 _key              | varchar   |       | Key text                                    
 _key_corrupt      | boolean   |       | Key data is corrupt                         
 _key_length       | bigint    |       | Total number of key bytes                   
 _message          | varchar   |       | Message text                                
 _message_corrupt  | boolean   |       | Message data is corrupt                     
 _message_length   | bigint    |       | Total number of message bytes               
(21 rows)

Now I can use the newly defined columns in my query

presto:rm> select created_at, user_name, tweet_text from wimbledon LIMIT 10;  

and the related results

image

We can always mix the defined columns with Presto's custom JSON parsing syntax if we need to extract some other fields.

select created_at, user_name, json_extract_scalar(_message, '$.user.default_profile') from wimbledon LIMIT 10;  

Oracle Data Visualization Desktop

As mentioned at the beginning of the article, the overall goal was to analyse the Wimbledon Twitter feed in real time with Oracle Data Visualization Desktop via JDBC, so let's complete the picture!

JDBC drivers

The first step is to download the Presto JDBC driver, version 0.175, which I found on the Maven website. I also tried the 0.180 version, downloadable directly from the Presto website, but I had several errors in the connection.
After downloading we need to copy the driver presto-jdbc-0.175.jar under the %INSTALL_DIR%\lib folder, where %INSTALL_DIR% is the Oracle DVD installation folder, and start DVD. Then I just need to create a new connection like the following

image

Note that:

  • URL: includes also the /kafka postfix, which tells Presto which catalog (storage) I want to query
  • Driver Class Name: this setting puzzled me a little bit; I was able to discover the string (with the help of Gianni Ceresa) by concatenating the folder name and the driver class name after unpacking the jar file

image

  • Username/password: those strings can be anything, since for this basic test we didn't set up any security on Presto.
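For reference, the connection settings I would expect for this setup look roughly as follows (the host name is the one used earlier in this post; verify the driver class by inspecting the presto-jdbc jar as described above):

URL:               jdbc:presto://linuxsrv.local.com:8080/kafka
Driver Class Name: com.facebook.presto.jdbc.PrestoDriver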

The whole JDBC setup process is described in this YouTube video provided by Oracle.

We can then define the source by just selecting the columns we want to import and creating a few additional ones, like Lat and Long parsed from the coordinates column, which is in the form [Lat, Long]. The dataset is now ready to be analysed as we saw at the beginning of the article, with the final result being:

image

Conclusions

As we can see from the above picture the whole process works (phew....). However, it has some limitations: there is no pushdown of functions to the source, so most of the queries we see against Presto are of the form

select tweet_text, tweet_id, user_name, created_at from (  
select coordinates,  
 coordinates_lat_long,
 created_at,
 tweet_id,
 tweet_text,
 user_followers,
 user_id,
 user_location,
 user_name,
 user_screenname,
 user_time_zone
from rm.wimbledon)  

This means that the whole dataset is retrieved every time, making this solution far from optimal for big volumes of data. In those cases the "parking" to a datastore step would probably be necessary. Another limitation is related to the transformations: the Lat and Long extractions from the coordinates field, along with other column transformations, are done directly in DVD, meaning that the formula is applied in the visualization phase. In the second post we'll see how the source parsing phase and query performance can be enhanced using Kafka Connect, the framework allowing an easy integration between Kafka and other sources or sinks.

One last word: winning Wimbledon eight times, fourteen years after the first victory and five years after the last one, is something impressive! Chapeau, Mr Federer!

Categories: BI & Warehousing

Duke Kunshan University Selects Oracle to Optimize Student Experience

Oracle Press Releases - Mon, 2017-07-17 07:00
Press Release
Duke Kunshan University Selects Oracle to Optimize Student Experience Oracle Student Cloud chosen to help innovative Liberal Arts & Research University in China attract, enroll and retain students

Redwood Shores, Calif.—Jul 17, 2017

Duke Kunshan University, a highly innovative joint venture of Duke University, Wuhan University and the City of Kunshan, China, has selected Oracle Student Cloud to support student recruitment, enrollment and engagement. Oracle Student Cloud will help Duke Kunshan University maintain operational excellence and student success by using technology to help modernize and optimize the student experience.

Established in 2014 and located in Kunshan, China, Duke Kunshan University offers a range of academic programs for students from China and throughout the world. To support its goal of becoming a world-class liberal arts and research university, Duke Kunshan University wanted to ensure that every student received a unique, all-inclusive experience from onboarding to graduation. To deliver a personalized and seamless experience across channels and devices, Duke Kunshan University selected Oracle Student Cloud.

“We knew that creating a new world-class university focused on liberal arts and research in China was going to be a challenge and a distinct opportunity for higher education innovation, and we want our students to be prepared and also acquainted with the latest technologies,” said Denis Simon, Executive Vice Chancellor at Duke Kunshan. “Duke Kunshan University will provide an intimate and forward-looking educational experience, from enrollment to graduation, for all of our students. Oracle Student Cloud is a student-centric solution that supports flexible learning models and helps us meet the increasing demands of today’s economy and diverse student body.”

Duke Kunshan University plans to use Oracle Student Cloud, including Student Engagement and Student Recruiting, to recruit highly qualified students, drive program success and help reduce costs. Student Engagement and Student Recruiting are designed to enable Duke Kunshan University to analyze the student journey and help meet its admission objectives. In addition, the university will be able to operationalize its undergraduate program through the implementation of Oracle’s Campus Solutions student information system.

"By combining deep domain knowledge that has been established over a quarter of a century with a modern and secure cloud, Oracle is helping more than 800 higher education institutions around the world drive success,” said Vivian Wong, group vice president, Higher Education Development at Oracle. “Oracle’s next-generation cloud platform for Higher Education is designed to modernize education institutions, accelerate insight, and empower students via an intuitive user experience, embedded analytics, and built-in collaboration delivering comprehensive support for the entire student lifecycle.”

Additional Information

Oracle Student Cloud is a comprehensive solution focused on managing the student lifecycle and promoting collaborative relationships with the goal of student success. The solution benefits institutions by enabling them to anticipate students’ needs, illuminating their academic path and empowering students to succeed. 

Duke Kunshan University selected global professional services firm Huron to integrate the cloud solution.

“Higher education leaders understand that the benefits of cloud technology go beyond efficiency gains to support their goals of attracting, retaining and graduating quality students,” said Steve Kish, Managing Director, Huron. “By providing students a seamless experience, including remote access to secure data with any type of device, forward-looking institutions like Duke Kunshan University are enhancing the student experience and student success."

 
Contact Info
Jennifer Yamamoto
Oracle
+1.916.761.9555
jennifer.yamamoto@oracle.com

About Duke Kunshan University

Duke Kunshan University is an innovative partnership between Duke University, Wuhan University, and the City of Kunshan to create a world-class liberal arts and research university offering a range of academic programs for students from China and around the world. A non-profit, joint-venture institution, Duke Kunshan was granted accreditation approval by China’s Ministry of Education in September 2013 and welcomed its inaugural class of students in August 2014. In August 2018, Duke Kunshan University will welcome the first students in its four-year undergraduate degree program. Currently, Duke Kunshan offers four master programs in medical physics, global health, environmental policy and management studies, which grant Duke degrees to its graduates. An undergraduate Global Learning Semester program offers a semester-long learning experience to undergraduate students currently enrolled at other Chinese and international universities.

The Duke Kunshan University campus is located in Kunshan, Jiangsu province, China.  Located in close proximity to both Shanghai and Suzhou and connected to both by high-speed rail, the city of Kunshan is a center for business, high-tech research and advanced manufacturing and has one of the fastest growing local economies in China.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

 
Talk to a Press Contact

Jennifer Yamamoto

  • +1.916.761.9555

acroread on Linux 6

Vikram Das - Sun, 2017-07-16 13:53
As per https://access.redhat.com/errata/RHSA-2013:1402 : Adobe Reader (acroread) allows users to view and print documents in Portable Document Format (PDF). Adobe Reader 9 reached the end of its support cycle on June 26, 2013, and will not receive any more security updates. Future versions of Adobe Acrobat Reader will not be available with Red Hat Enterprise Linux.

Some of our ERPs still use acroread for pasta printing. When we upgraded the OS to OEL 6, we had to reinstall Adobe Reader by getting the rpm from the Adobe site ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/ – a link I found via https://www.reddit.com/r/linux/comments/2hsgq6/linux_version_of_adobe_reader_no_longer/. Fortunately, Adobe still hosts it. It is also available from Red Hat if you have a subscription:

Red Hat Enterprise Linux Server 6
  x86_64:
    acroread-9.5.5-1.el6_4.1.i686.rpm (SHA-256: ac0934cfe887c6f49238ebff2c963adc33c1531019660bbfa4f8852d725136a2)
    acroread-plugin-9.5.5-1.el6_4.1.i686.rpm (SHA-256: 8116e41e4825f74478731a7e970482274db5c9d4b9489b9f70ca4e1a278e526b)
  i386:
    acroread-9.5.5-1.el6_4.1.i686.rpm (SHA-256: ac0934cfe887c6f49238ebff2c963adc33c1531019660bbfa4f8852d725136a2)
    acroread-plugin-9.5.5-1.el6_4.1.i686.rpm (SHA-256: 8116e41e4825f74478731a7e970482274db5c9d4b9489b9f70ca4e1a278e526b)

After attempting to install the rpm with the rpm -ivh AdbeRdr9.5.5-1_i486linux_enu.rpm command, we got a dependency error for libxml2.so. On installing libxml2.so, we got errors for other missing dependencies. So eventually we did yum install AdbeRdr9.5.5-1_i486linux_enu.rpm, which installed about 80 rpms that acroread needs to function.
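As a recap, the sequence looked roughly like this (run as root, with the downloaded rpm in the current directory):

# direct rpm install fails with unresolved dependencies (libxml2.so and friends)
rpm -ivh AdbeRdr9.5.5-1_i486linux_enu.rpm

# let yum resolve and install the ~80 dependent packages instead
yum install AdbeRdr9.5.5-1_i486linux_enu.rpm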

After installation, we kept getting this error:

dirname: missing operand

The post on http://www.linuxquestions.org/questions/slackware-14/error-message-dirname-missing-operand-when-starting-acroread-827012/ recommended a fix:

The issue was fixed by commenting line 529 in the bash script /opt/Adobe/Reader/bin/acrobat

529 #  [ -n "${MozPath}" ] || mozillaPath="`readlink "$MozPath" | xargs dirname`"

The DBAs were getting this error when they used sudo to login as applmgr:

Adobe Reader does not need to be run as a privileged user. Please remove 'sudo' from the beginning of the command

The post on https://forums.freebsd.org/threads/17345/ recommended that we comment out lines 331, 332, 333 and 334:

# Do not allow launch using 'sudo'.
#if [ "${ACRO_ALLOW_SUDO+set}" != "set" -a \( "${SUDO_USER+set}" = "set" -o "${SUDO_UID+set}" = "set" -o "${SUDO_GID}" = "set" \) ]; then
# printf "%s\n" "Adobe Reader does not need to be run as a privileged user. Please remove 'sudo' from the beginning of the command."
# exit 1
#fi

That fixed the sudo issue.

I have asked the DBA team to reach out to the Developers to stop using acroread and migrate to an alternative that is natively supported in Linux.  Here's more from Red Hat on their security advisory https://access.redhat.com/errata/RHSA-2013:1402 :

Red Hat advises users to reconsider further use of Adobe Reader for Linux,
as it may contain known, unpatched security issues. Alternative PDF
rendering software, such as Evince and KPDF (part of the kdegraphics
package) in Red Hat Enterprise Linux 5, or Evince and Okular (part of the
kdegraphics package) in Red Hat Enterprise Linux 6, should be
considered. These packages will continue to receive security fixes.
Red Hat will no longer provide security updates to these packages and
recommends that customers not use this application on Red Hat Enterprise
Linux effective immediately.
Categories: APPS Blogs

ADF BC - Create View Object From Query with Custom Implementation Class

Andrejus Baranovski - Sun, 2017-07-16 12:18
I had a request to explain how to create a dynamic ADF BC VO from a SQL statement and set a custom VO implementation class for the newly created VO instance. The custom VO implementation class extends ADF BC ViewObjectImpl and overrides a super method:


There is a method createViewObjectFromQueryStmt. In previous ADF versions this method had two parameters – the VO instance name and the SQL statement. In current ADF 12c there is a second signature of the same method, which contains an option to specify the VO implementation class name. Dynamic VO from SQL with a VO implementation class:


ADF BC custom methods can be tested with ADF BC tester:


Overridden method from custom VO implementation class is called:


Download sample application - ADFVOFromSQLApp.zip.

Auto suggest with HTML5 Data List in Vue.js 2 application

Amis Blog - Sun, 2017-07-16 06:11

This article shows data (News stories) retrieved from a public REST API (https://newsapi.org) in a nice and simple yet attractive Vue.js 2 application. In the example, the user selects a news source using a dropdown select component.

image

I was wondering how hard – or easy – it would be to replace the select component with an input component with associated data list – a fairly new HTML5 addition that is rendered as a free format entry field with associated list of suggestions based on the input. In the case of the sample News List application, this component renders like this:

image

and this if the user has typed “on”

image

To change the behavior of the SourceSelection component in the sample, I first clone the original source repository from GitHub.  I then focus only on the file SourceSelection.vue in the components directory.

I have added the <datalist> tag with the dynamic creation of <option> elements in the same way as in the original <select> element. With one notable change: with the select component, we have both the display label and the underlying value. With datalist, we have only one value to associate with each option – the display label.

The input element is easily associated with the datalist, using the list attribute. The input element supports the placeholder attribute that allows us to present an initial text to the end user. The input element is two-way databound to property source on the component. Additionally, the input event – which fires after each change in the value of the input element – is associated with a listener method on the component, called sourceChanged.

I now make a distinction between the source property – which is bound to the value in the input field – and the deepSource property, which holds the currently selected news source object (with name, id and url). In the function sourceChanged() the new value of source is inspected. If it differs from the currently selected deepSource, then we try to find this new value of source in the array of news sources. If we find it, we set that news source as the new deepSource – and publish the sourceChanged event.
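A minimal sketch of what that looks like in SourceSelection.vue – the property and method names follow the description above, while the sources array name and the exact template markup are assumptions on my part (the actual file is in the gist linked below):

<template>
  <div>
    <!-- free format input, bound to source, backed by the datalist of suggestions -->
    <input list="newsSources" v-model="source" placeholder="Select a news source" @input="sourceChanged"/>
    <datalist id="newsSources">
      <!-- only one value per option: the display label -->
      <option v-for="s in sources" :key="s.id" :value="s.name"/>
    </datalist>
  </div>
</template>

<script>
export default {
  data() {
    return { source: '', deepSource: {}, sources: [] }
  },
  methods: {
    sourceChanged() {
      // ignore the event if the input still matches the current selection
      if (this.deepSource && this.deepSource.name === this.source) return
      // try to resolve the typed value to one of the known news sources
      const match = this.sources.find(s => s.name === this.source)
      if (match) {
        this.deepSource = match
        // publish the sourceChanged event for the news list component
        this.$emit('sourceChanged', match)
      }
    }
  }
}
</script>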

The full code for the SourceSelection.vue file is available in this gist: https://gist.github.com/lucasjellema/1c92c052d3a278ee27d17cfa3ea3b54a

The post Auto suggest with HTML5 Data List in Vue.js 2 application appeared first on AMIS Oracle and Java Blog.

First encounters of a happy kind – rich web client application development with Vue.js

Amis Blog - Sun, 2017-07-16 01:39

Development of rich web applications can be done in various ways, using one or more of many frameworks. In the end it all boils down to HTML(5), CSS and JavaScript, run and interpreted by the browser. But the exact way of getting there differs. Server-side oriented web applications with .NET and Java EE (Servlet, JSP, JSF) and also PHP, Python and Ruby have long been the most important way of architecting web applications. However, with the power of today’s browsers, the advanced state of HTML5 and JavaScript and the high degree of standardization across browsers, it now almost goes without saying that web applications are implemented with a rich client side that interacts with a backend to a very limited degree, and typically only to retrieve or pass data or enlist external services and complex backend operations. What client/server did to terminal based computing in the early nineties, the fat browser is now doing to three-tier web computing with its heavy focus on the server side.

The most prominent frameworks for developing these fat browser-based clients are Angular and Angular 2, React.js and Ember, complemented by jQuery and a plethora of other libraries, components and frameworks (see for example this list of top 9 frameworks). And then there is Vue.js. To be honest, I am not sure where Vue ranks in all the trends and StackOverflow comparisons etc. However, I did decide to take a quick look at Vue.js – and I liked what I saw.

From the Vue website:

Vue (pronounced /vjuː/, like view) is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is very easy to pick up and integrate with other libraries or existing projects. On the other hand, Vue is also perfectly capable of powering sophisticated Single-Page Applications when used in combination with modern tooling and supporting libraries.

I have never really taken to Angular. It felt overly complex and I never particularly liked it. Perhaps I should give it another go – now that my understanding of modern web development has evolved. Maybe now I am finally ready for it. Instead, I checked out Vue.js and it made me more than a little happy. I smiled as I read through the introductory guide, because it made sense. The pieces fit together. I understand the purpose of the main moving pieces and I enjoy trying them out. The two way data binding is fun. The encapsulation of components, passing down properties, passing up events – I like that too. The HTML syntax, the use of templates, the close fit with “standard” HTML. It somehow agrees with me.

Note: it is still early days and I have not yet built a serious application with Vue. But I thought I should share some of my excitement.

The creator of Vue.js, Evan You ( http://evanyou.me/ ), writes about Vue’s origins:

I started Vue as a personal project when I was working at Google Creative Labs in 2013. My job there involved building a lot of UI prototypes. After hand-rolling many of them with vanilla JavaScript and using Angular 1 for a few, I wanted something that captured the declarative nature of Angular’s data binding, but with a simpler, more approachable API. That’s how Vue started.

And that is what appealed to me.

The first thing I did to get started with Vue.js was to read through the Introductory Guide for Vue.js 2.0: https://vuejs.org/v2/guide/ .

Component Tree

It is a succinct tour and explanation, starting at the basics and quickly coming round to the interesting challenges. Most examples in the guide work inline – and using the Google Chrome add-in for Vue.js it is even easier to inspect what is going on in the runtime application.

The easiest way to try out Vue.js (at its simplest) is using the JSFiddle Hello World example.
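The hello world itself is tiny – a minimal sketch of what that fiddle boils down to (the element id and message text are of course arbitrary):

<div id="app">{{ message }}</div>

<script src="https://unpkg.com/vue"></script>
<script>
  // two-way reactive binding: changing message updates the DOM automatically
  new Vue({
    el: '#app',
    data: { message: 'Hello Vue!' }
  })
</script>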

Next, I read through and followed the example of a more interesting Vue application in this article that shows data (News stories) retrieved from a public REST API (https://newsapi.org):

This example explains in a very enjoyable way how two components are created – a news source selection and a news story list for the selected source – as encapsulated, independent components that still work together. Both components interact with the REST API to fetch their data. The article starts with an instruction on how to install the Vue command line tool and initialize a new project with a generated scaffold. If Node and NPM are already installed, you will be up and running with the hello world of Vue applications in less than 5 minutes.

Vue and Oracle JET

One other line of investigation is how Vue.js can be used in an Oracle JET application, to complement and perhaps even replace KnockOut. More on that:

The post First encounters of a happy kind – rich web client application development with Vue.js appeared first on AMIS Oracle and Java Blog.

Running any Node application on Oracle Container Cloud Service

Amis Blog - Sun, 2017-07-16 00:32

In an earlier article, I discussed the creation of a generic Docker Container Image that runs any Node.JS application based on sources for that application on GitHub. When the container is started, the GitHub URL is passed in as a parameter and the container will download the sources and run the application. Using this generic image, you can run your Node application everywhere you can run a Docker container. One of the places where you can run a Docker Container is the Oracle Container Cloud Service (OCCS) – a service that offers a platform for managing your container landscape. In this article, I will show how I used OCCS to run my generic Docker image for running Node applications and how I configured the service to run a specific Node application from GitHub.

Getting started with OCCS is described very well in an article by my colleague Luc Gorissen on this same blog: Docker, WebLogic Image on Oracle Container Cloud Service. I used his article to get started myself.

The steps are:

  • create OCCS Service instance
  • configure OCCS instance (with Docker container image registry)
  • Create a Service for the desired container image (the generic Node application runner) – this includes configuring the Docker container parameters such as port mapping and environment variables
  • Deploy the Service (run a container instance)
  • Check the deployment (status, logs, assigned public IP)
  • Test the deployment – check if the Node application is indeed available

 

Create OCCS Service instance

Assuming you have an Oracle Public Cloud account with a subscription to OCCS, go to the Dashboard for OCCS and click on Create Service.

image

Configure the service instance:

 

image

However, do not make it too small (!) (Oracle Cloud does not come in small portions):

image

So now with the minimum allowed data volume size (for a stateless container!)

image

This time I pass the validations:

image

And the Container Cloud Service instance is created:

image

 

Configure OCCS instance (with Docker container image registry)

After some time, when the instance is ready, I can access it:

image

image

It is pretty sizable as you can see.

Let’s access the Container console.

image

image

The Dashboard gives an overview of the current status, the actual deployments (none yet) and access to Services, Stacks, Containers, Images and more.

image

One of the first things to do is to configure a (Container Image) Registry – for example a local registry or an account on Docker Hub. I configured my own Docker Hub account, where I have saved the container images from which I need to create containers in the Oracle Container Cloud:

image

My details are validated:

image

The registry is added:

image

 

Create a Service for a desired container image

Services are container images along with the configuration to be used for running containers. Oracle Container Cloud comes with a number of popular container images already configured as services. I want to add another service for my own image: the generic Node application runner. For this I select the image from my Docker Hub account, followed by configuring the Docker container parameters such as port mapping and environment variables.

image

The Service editor is the form in which you define the image (from one of the configured registries), the name of the service (which represents the combination of the image with a set of configuration settings that turn it into a specific service) and of course those configuration settings themselves – port mappings, environment variables, volumes, etc.

image

Note: I am creating a service for the image that can run any Node application that is available in GitHub (as described here: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/ )

Deploy the Service (run a container instance)

After the service has been created, it is available as the blueprint to run new containers from. This is done through a Deployment, which ties together a Service with some runtime settings around scaling, load balancing and the like:

image

Set the deployment details for the new deployment of this service:

image

After completing these details, press deploy to go ahead and run the new deployment; in this case it consists of a single instance (boring….) but it could have been more involved.

image

The deployment is still starting.

A little later (a few seconds) the container is running:

image

Check some details:

image

To check the deployment (status, logs, assigned IP), click on the container name:

image

Anything written to the console inside the container is accessible from the Logs:

image

 

To learn about the public IP address at which the application is exposed, we need to turn to the Hosts tab.

Monitor Hosts

image

Drill down on one specific host:

image

and learn its public IP address, where we can access the application running in the deployed container.

Test the deployment – check if the Node application is indeed available

With the host’s public IP address, and knowing that port 8080 inside the container (remember the environment variable APP_PORT that was set to 8080 for the generic Node application runner) is mapped to port 8005 externally, we can now invoke the application running inside the container deployed on the Container Cloud Service from our local browser.
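A quick check from the command line works just as well; the address below is a placeholder for the public IP of the host and 8005 is the externally mapped port:

curl http://<host-public-ip>:8005/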

 

image

 

And there is the output of the application (I never said it would be spectacular…)

image

 

Conclusion

After having gotten used to the sequence of actions:

  • configure registry (probably only once)
  • configure a service (for every container image plus specific setup of configuration parameters, including typical Docker container settings such as port mapping, volumes, environment variables)
  • define and run a deployment (from a service) with scaling factor and other deployment details
  • get hold of host public IP address to access the application in the container

Oracle Container Cloud Service provides a very smooth experience that compares favorably with other container cloud services and management environments I have seen. From a developer’s perspective at least, OCCS does a great job. It is a little too early to say much about the Ops side of things – how day-to-day operations with OCCS work out.

The post Running any Node application on Oracle Container Cloud Service appeared first on AMIS Oracle and Java Blog.

OpenJDK 9: Jshell - Using Swing / doing GUI stuff

Dietrich Schroff - Sat, 2017-07-15 13:40
After covering the built-in commands of jshell and how to load and save scripts, I tried to get Swing components running.

I created a script HelloWorld.java:
import javax.swing.*;       

JFrame frame = new JFrame("HelloWorldSwing");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JLabel label = new JLabel("Hello World");
frame.getContentPane().add(label);
frame.pack();
frame.setVisible(true);
But this does not work with Ubuntu:
jshell HelloWorld.java
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f61041bb009, pid=7440, tid=7442
#
# JRE version: OpenJDK Runtime Environment (9.0) (build 9-internal+0-2016-04-14-195246.buildd.src)
# Java VM: OpenJDK 64-Bit Server VM (9-internal+0-2016-04-14-195246.buildd.src, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# C  [libjava.so+0x1d009]  JNU_GetEnv+0x19
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport %p %s %c %P" (or dumping to /home/schroff/core.7440)
#
# An error report file with more information is saved as:
# /home/schroff/hs_err_pid7440.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
|  State engine terminated.
|  Restore definitions with: /reload restore
|  Resetting...
|  Welcome to JShell -- Version 9-internal
|  For an introduction type: /help intro
Hmmm. This does not look good.

On a Windows 10 host it works without any problem:


I hope this bug will be fixed soon...
EDIT: After downloading a new version from java.net it worked:

./java -version
java version "9"
Java(TM) SE Runtime Environment (build 9+178)
Java HotSpot(TM) 64-Bit Server VM (build 9+178, mixed mode)
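For completeness, this is roughly how the script can be run against the freshly downloaded JDK; the path is just an example for wherever the download was unpacked:

export JAVA_HOME=~/jdk-9
export PATH=$JAVA_HOME/bin:$PATH
jshell HelloWorld.java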


Dashboards For Banking

Nilesh Jethwa - Sat, 2017-07-15 09:30

Integrating huge amounts of financial data under time pressure and strict security restrictions is an extremely complex process. In this regard, the banking sector has found dashboards to be useful.

There’s no doubt your own institution grapples with humongous sets of data. Because of the large volumes of data and the manual treatment given to it, it is highly likely that you cannot make sense of this data in a way that lets you take full advantage of it.

That being the case, a performance metrics dashboard can fill the gap. By making the available data easy to analyze and the findings easy to act on, you can gain insight into how well your processes are doing. Dashboard snapshots can provide a lot of information helpful in decision-making.

What can a banking dashboard provide?

Here is a quick look at what information a banking dashboard can provide:

  • A dashboard can give you information on the performance of your new products and effectiveness of your pricing policies.
  • With it, you can also gain insights into performance problems. In this regard, you can make use of the dashboard’s drill-down capability.
  • The dashboard will enable you to view real-time operational information. Specifically, the dashboard will give you information on credit risk, operational risk, sales team performance, and service utilization.
  • With this tool, you can do analysis on profitability and perform margin analysis.
  • The dashboard will allow you to do away with periodic reporting. This is because the tool will give you information on a day-to-day basis.
  • Lastly, this tool can give you vital information on the past and present performance of your institution. What’s more, it will give you possible scenarios for your bank in the future.

Learn more at http://www.infocaptor.com/dashboard/performance-metrics-and-reporting-dashboards-for-banking

Documentum – Change password – 2 – CS – dm_bof_registry

Yann Neuhaus - Sat, 2017-07-15 03:30

When installing a Global Registry on a Content Server, you will be asked to set up the BOF username and password. The name of this user is by default “dm_bof_registry”, so even if you can change it, I will use this value in this blog. This is one of the important accounts that are created inside the Global Registry. So, what are the needed steps to change the password of this account?

 

Let’s start with the simple part: changing the password of the account in the Global Registry. For this, I will use iapi below but you can do the same thing using Documentum Administrator, idql, dqMan or anything else that works. First, let’s log in to the Content Server, switch to the Installation Owner’s account and start by defining an environment variable that will contain the NEW password to be used:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the dm_bof_registry password: " bof_pwd; echo
Please enter the dm_bof_registry password:
[dmadmin@content_server_01 ~]$

 

Once that is done, we can execute the iapi commands below to update the password for the dm_bof_registry account. As there is a local trust on the Content Server with the Installation Owner, I don’t need to enter the password, so I use “xxx” instead to log in to the Global Registry (GR_DOCBASE). Execute the commands below one after the other and don’t include the “> ” characters; just paste the iapi commands and, after pasting the final EOF, an iapi session will be opened and all commands will be executed, like this:

[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dm_bof_registry'
> set,c,l,user_password
> $bof_pwd
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000905 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Then to verify that the password has been set properly in the Global Registry, we can try to login with the dm_bof_registry account:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udm_bof_registry -P$bof_pwd


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000906 started for user dm_bof_registry."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

If the password has been changed properly, the output will be similar to the one above: a session will be opened and the only command executed will be “quit” which will close the iapi session automatically. That was pretty easy, right? Well that’s clearly not all there is to do to change the BOF password, unfortunately…

 

The “problem” with the dm_bof_registry account is that it is used on all DFC clients to register them, to establish trust, and so on… Therefore, if you change the password of this account, you will need to reflect this change on all clients that connect to your Content Servers. In the steps below, I will provide some commands that can be used to do that on the typical DFC clients (JMS, xPlore, DA, D2, …). If I’m not covering one of your DFC clients, the steps are basically always the same; it’s just the commands that differ:

  • Listing all dfc.keystore
  • Updating the dfc.properties
  • Removing/renaming the dfc.keystore files
  • Restarting the DFC clients
  • Checking that the dfc.keystore files have been recreated

 

Before going through the different DFC clients, you first need to encrypt the BOF user’s password because it will always be used in its encrypted form, so let’s encrypt it on a Content Server:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${bof_pwd}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example (“AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0”) but this will be the encrypted password of our user. I will use this value in the commands below, so whenever you see it, just replace it with what your “java -cp …” command returned.

 

I. Content Server

On the Content Server, the main DFC client is the JMS. You will have one dfc.properties for each JMS application, one global one for the CS, and so on… So, let’s update all of that with a few commands only. Normally you should only have the definition of dfc.globalregistry.password in the file $DOCUMENTUM_SHARED/config/dfc.properties. If you have this definition elsewhere, you should consider using the “#include” statement to avoid duplicating the definitions…
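As an illustration, an application-specific dfc.properties could then contain nothing more than an include of the global file (the path below is only an example; adjust it to your installation):

#include /app/dctm/shared/config/dfc.properties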

[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

If you do it properly, all the dfc.keystore files will be recreated with the restart and you can verify that by comparing the output of the first and last commands.
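A minimal way to double-check, assuming the restart happened within the last few minutes, is to list only the freshly (re)created keystore files:

[dmadmin@content_server_01 ~]$ find $DOCUMENTUM_SHARED -type f -name "dfc.keystore" -mmin -10 -exec ls -l {} \;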

 

II. WebLogic Server

In this part, I will assume a WebLogic Server is used for the D2, D2-Config and DA applications. If you are using Tomcat instead, then just adapt the path. Below I will use:

  • $WLS_APPLICATIONS as the directory where all the application WAR files are present. If you are using exploded applications (just a folder, not a WAR file) OR if you are using an external dfc.properties file (even with a WAR file it is possible to keep the dfc.properties outside of it), then the “jar -xvf” and “jar -uvf” commands aren’t needed.
  • $WLS_APPS_DATA as the directory where the Application Data are present (Application log files, dfc.keystore, cache, …)

 

These two folders might be the same depending on how you configured your Application Server. All I’m doing below is just updating the dfc.properties files for D2, D2-Config and DA in order to use the new encrypted password.

[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; done
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done

 

Once done, the next steps depend, again, on how you configured your Application Server. If you are using WAR files, you will need to redeploy them. If not, you might have to restart your Application Server for the change to be taken into account and for the keystore file to be re-created.
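If you went the WAR route, the redeployment can also be scripted, for example with weblogic.Deployer; the admin URL and user below are placeholders and the tool will prompt for the password:

[weblogic@weblogic_server_01 ~]$ . $WL_HOME/server/bin/setWLSEnv.sh
[weblogic@weblogic_server_01 ~]$ java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -username weblogic -redeploy -name D2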

 

III. Full Text Server

On the Full Text Server, it’s again the same stuff but for all Index Agents this time.

[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `ls $XPLORE_HOME/jboss7.1.1/server/DctmServer_*/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties`; do sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ service xplore stop
[xplore@xplore_server_01 ~]$ service xplore start
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

Again if you do it properly, all the dfc.keystore files will be recreated with the restart.

 

When everything has been done, just leave the environment up & running for some time and check the logs for authentication failures regarding the dm_bof_registry user. As you saw above, changing the dm_bof_registry password isn’t really complicated but it’s quite repetitive and time consuming, so better script all this! :)
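A quick way to spot such failures on the Content Server is to grep the repository log for the usual authentication error code; the log path below assumes the default location and GR_DOCBASE is just the example repository name:

[dmadmin@content_server_01 ~]$ grep -i "DM_SESSION_E_AUTH_FAIL" $DOCUMENTUM/dba/log/GR_DOCBASE.log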

 

 

 

The post Documentum – Change password – 2 – CS – dm_bof_registry appeared first on Blog dbi services.

Documentum – Change password – 1 – CS – AEK and Lockbox

Yann Neuhaus - Sat, 2017-07-15 02:53

This blog is the first one of a series that I will publish in the next few days/weeks regarding how you can change some passwords in Documentum. In these blogs, I will talk about a lot of accounts like the Installation Owner of course, the Preset and Preferences accounts, the JKS passwords, the JBoss Admin passwords, the xPlore xDB passwords, and so on…

 

So, let’s dig in with the first ones: the AEK and Lockbox passphrases. In this blog, I will only talk about the Content Server Lockbox, not about the D2 Lockbox (which also lives under the JMS). I’m assuming here that the AEK key is stored in the Content Server Lockbox, as is recommended starting with CS 7.2 for security reasons.

 

In this blog, I will use “dmadmin” as the Installation Owner. First, you need to connect to the Content Server of this environment using the Installation Owner account. In case you have a High Availability environment, you will obviously need to do this on all Content Servers.

 

Then, I’m defining some environment variables to be sure I’m using the right passphrases and there is no typo in the commands. The first two commands below will be used to store the CURRENT and NEW passphrases for the AEK. The last two commands are for the Lockbox. When you execute the “read” command, the prompt isn’t returned. Just paste the passphrase (it’s hidden) and press enter. Then the prompt is returned and the passphrase is stored in the environment variable. I’m describing this in this blog only; in the next blogs, I will just use the commands without explanation:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT AEK passphrase: " c_aek_pp; echo
Please enter the CURRENT AEK passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
Please enter the NEW AEK passphrase:
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT lockbox passphrase: " c_lb_pp; echo
Please enter the CURRENT lockbox passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
Please enter the NEW lockbox passphrase:
[dmadmin@content_server_01 ~]$

 

Maybe a small backup of the Lockbox, just in case…:

[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/dba/secure $DOCUMENTUM/dba/secure_bck_$(date "+%Y%m%d-%H%M")
[dmadmin@content_server_01 ~]$

 

OK, to ensure that the commands will go smoothly, let’s just verify that the environment variables are defined properly (I’m adding “__” at the end of the echo commands to be sure there is no space at the end of the passwords). Obviously the “read -s” commands above have been executed to hide the passphrases, so if you don’t want the passphrases to be stored in the shell history, don’t execute the two commands below.

[dmadmin@content_server_01 ~]$ echo "CURRENT_AEK_PP=${c_aek_pp}__"; echo "NEW_AEK_PP=${n_aek_pp}__"
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ echo "CURRENT_LOCKBOX_PP=${c_lb_pp}__"; echo "NEW_LOCKBOX_PP=${n_lb_pp}__"
[dmadmin@content_server_01 ~]$

 

To verify that the CURRENT AEK and Lockbox passphrases are correct, you can execute the following commands. Just a note: when you first create the Lockbox, the Documentum Installer will ask you which algorithm you want to use… I always choose the strongest one for security reasons, so below I’m using “AES_256_CBC”. If you are using something else, just adapt it:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -passphrase ${c_aek_pp} -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -resetfingerprint
Lockbox lockbox.lb
Lockbox Path /app/dctm/server/dba/secure/lockbox.lb
Reset host done
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -algorithm AES_256_CBC -check


Key - CSaek uses algorithm AES_256_CBC.

** An AEK store with the given passphrase exists in lockbox lockbox.lb and got status code returned as '0'.

 

For the three commands above, the result should always be “Operation succeeded”, “Reset host done” and “got status code returned as ‘0’”. If the second command fails, then obviously it’s the Lockbox passphrase that isn’t set properly; otherwise it’s the AEK passphrase.

 

OK, now that all the variables are set and the current passphrases are verified, we can start updating them. Let’s start with the AEK:

[dmadmin@content_server_01 ~]$ dm_crypto_change_passphrase -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -newpassphrase ${n_aek_pp}
[dmadmin@content_server_01 ~]$

 

Then the Lockbox:

[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -changepassphrase -newpassphrase ${n_lb_pp}
[dmadmin@content_server_01 ~]$

 

To verify that the NEW passphrases are now used, you can again run the three above commands. The only difference is that you need to use the environment variables for the NEW passphrases and not the CURRENT (old) ones:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -resetfingerprint
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -keyname CSaek -passphrase ${n_aek_pp} -algorithm AES_256_CBC -check

 

Now we are almost done. If the three previous commands gave the correct output, it is pretty safe to assume that everything is OK. Nevertheless, to be 100% sure that the Content Server Lockbox isn’t corrupted in some way, it is always good to reboot the Linux host too. Once the Linux host is up & running again, you will have to execute the first command above (the dm_crypto_boot) to store the Lockbox information into the Shared Memory so that the docbase(s) can start. If you are able to start the docbase(s) using the NEW passphrases, then the AEK and Lockbox have been updated successfully!
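After the reboot, the sequence looks roughly like this; the passphrases are typed again (the environment variables are gone after the reboot) and GR_DOCBASE is just an example docbase name:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_start_GR_DOCBASE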

 

As a side note, if the Server Fingerprint has been updated (for example after some recent OS patching or similar), then you might need to execute the second command too (dm_crypto_manage_lockbox) as well as regenerate the D2 Lockbox (which isn’t described in this blog but will be in a later one).

 

 

 

The post Documentum – Change password – 1 – CS – AEK and Lockbox appeared first on Blog dbi services.

TNS-12543: TNS:destination host unreachable

Amardeep Sidhu - Fri, 2017-07-14 23:53

Scenario: setting up a physical standby from Exadata to a non-Exadata single instance. tnsping from standby to primary works fine but tnsping from primary to standby fails with:

TNS-12543: TNS:destination host unreachable

I am able to ssh to the standby from the primary and can ping it as well, but tnsping doesn’t work. From the error description we can figure out that something is blocking the access. In this case it was iptables, which was enabled on the standby server.

Stopping the service resolved the issue.

service iptables stop
chkconfig iptables off
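If disabling the firewall entirely is not an option, an alternative sketch (run as root) is to open only the listener port, assuming the default 1521; adjust the port to your configuration:

iptables -I INPUT -p tcp --dport 1521 -j ACCEPT
service iptables save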

The error is an obvious one but sometimes it just doesn’t strike you that it could be something simple like that.

Categories: BI & Warehousing

How to Create Missing Records with Analytical Functions

Tom Kyte - Fri, 2017-07-14 20:26
Hi AskTom Team, I am having some trouble figuring out a query to do the following: I have a staging table populated by an external system. This table stores information about how much an item sold during a day. If an item hasn't sold anything d...
Categories: DBA Blogs

PostgreSQL Inheritance in Oracle

Tom Kyte - Fri, 2017-07-14 20:26
Hi Tom, How do I implement inheritance this way in oracle? create table requests(); create table requests_new() inherits (requests); create table requests_old() inherits (requests); I should be able to query the child tables independentl...
Categories: DBA Blogs

help

Tom Kyte - Fri, 2017-07-14 20:26
For which constraint does the Oracle Server implicitly create a unique index? a) PRIMARY KEY b) NOT NULL c) FOREIGN KEY d) CHECK Which tablespace can NOT be recovered with the database open? a) USERS b) TOOLS c) DATA d) SYSTEM ...
Categories: DBA Blogs

Query Returns via SQL*Plus - but not via ODP.net Driver

Tom Kyte - Fri, 2017-07-14 20:26
We have a database with some partitioned tables (main table by value, the children by reference). We have a query that includes a function call in the where clause. Select bunch_of_columns, package.function(parameter) as column18 from ta...
Categories: DBA Blogs

SQL*Loader save filename into table column

Tom Kyte - Fri, 2017-07-14 20:26
I need to import different csv-files into 1 table. I need to use the sqlloader. (Oracle Version 12.1.0.2) This is my control-file: load data append into table SAMP_TABLE fields terminated by ',' OPTIONALLY ENCLOSED BY '"' AND '"' traili...
Categories: DBA Blogs
