Feed aggregator

Very simple Oracle package for HTTPS and HTTP

XTended Oracle SQL - Thu, 2015-10-08 19:54

I don’t like importing certificates, so I can’t use httpuritype for HTTPS pages; instead I decided to create a package that works with HTTPS the same way as with HTTP.
It was pretty easy with Java stored procedures :)
github/XT_HTTP

java source: xt_http.jsp
create or replace and compile java source named xt_http as
package org.orasql.xt_http;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.HttpURLConnection;

import java.sql.Connection;
import oracle.jdbc.driver.*;
import oracle.sql.CLOB;
 

public class XT_HTTP {

   /**
    * Function getPage
    * @param sURL page URL
    * @return CLOB with the page content
    */
    public static CLOB getPage(java.lang.String sURL)
    throws java.sql.SQLException
     {
        OracleDriver driver = new OracleDriver();
        Connection conn     = driver.defaultConnection();
        CLOB result         = CLOB.createTemporary(conn, false, CLOB.DURATION_CALL);
        result.setString(1," ");
        try {
            URL url = new URL(sURL);
            // HttpsURLConnection extends HttpURLConnection,
            // so this single cast handles both http and https URLs
            HttpURLConnection con = (HttpURLConnection)url.openConnection();
            if(con!=null){
                BufferedReader br =
                        new BufferedReader(
                                new InputStreamReader(con.getInputStream()));
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = br.readLine()) != null){
                    sb.append(line);
                }
                br.close();
                result.setString(1,sb.toString());
            }
        } catch (MalformedURLException e) {
            result.setString(1, e.getMessage());
        } catch (IOException e) {
            result.setString(1, e.getMessage());
        }
        return result;
    }
    
    public static java.lang.String getString(java.lang.String sURL) {
        String result="";
        try {
            URL url = new URL(sURL);
            HttpURLConnection con = (HttpURLConnection)url.openConnection();
            if(con!=null){
                BufferedReader br =
                        new BufferedReader(
                                new InputStreamReader(con.getInputStream()));
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = br.readLine()) != null){
                    sb.append(line);
                }
                br.close();
                result = sb.length() > 4000
                       ? sb.substring(0, 4000)  // varchar2 limit: 4000 chars
                       : sb.toString();
            }
        } catch (MalformedURLException e) {
            return e.getMessage();
        } catch (IOException e) {
            return e.getMessage();
        }
        return result;
    }
}
/


package xt_http
create or replace package XT_HTTP is
/**
 * Get page as CLOB
 */
  function get_page(pURL varchar2)
    return clob
    IS LANGUAGE JAVA
    name 'org.orasql.xt_http.XT_HTTP.getPage(java.lang.String) return oracle.sql.CLOB';

/**
 * Get page as varchar2(max=4000 chars)
 */
  function get_string(pURL varchar2)
    return varchar2
    IS LANGUAGE JAVA
    name 'org.orasql.xt_http.XT_HTTP.getString(java.lang.String) return java.lang.String';
    
end XT_HTTP;
/


We have to grant connection permissions:

begin
  dbms_java.grant_permission(
     grantee           => 'XTENDER'                       -- username
   , permission_type   => 'SYS:java.net.SocketPermission' -- permission class
   , permission_name   => 'ya.ru:443'                     -- host:port
   , permission_action => 'connect,resolve'               -- actions
  );
end;
/
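Note that the permission is granted per host and port, so the HTTPS calls against google.com in the usage example below need their own grant (same call, different permission_name):

```sql
begin
  dbms_java.grant_permission(
     grantee           => 'XTENDER'
   , permission_type   => 'SYS:java.net.SocketPermission'
   , permission_name   => 'google.com:443'
   , permission_action => 'connect,resolve'
  );
end;
/
```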

And now we can easily get any page:

USAGE example:
declare
  c clob;
  s varchar2(8000);
begin
  --- Through HTTPS as CLOB:
  c:=xt_http.get_page('https://google.com');

  --- Through HTTP as CLOB
  c:=xt_http.get_page('http://ya.ru');
  
  --- Through HTTPS as varchar2:
  s:=xt_http.get_string('https://google.com');

  --- Through HTTP as varchar2
  s:=xt_http.get_string('http://ya.ru');
end;
/
select length( xt_http.get_page('https://google.com') ) page_size from dual;
Categories: Development

Oracle Priority Support Infogram for 08-OCT-2015

Oracle Infogram - Thu, 2015-10-08 14:46

Oracle OpenWorld

The session schedules are rolling in. Here are a few:


RDBMS

Last week on AskTom, from All Things SQL.

Database Insider - October 2015 issue now available, from Exadata Partner Community – EMEA.


Security

What Is SQL Injection and How to Stop It, from All Things SQL. It’s been a while since we ran a posting on SQL injection. It’s an easy trap to fall into in coding, so this may be a good time to review your apps and make sure you aren’t vulnerable.

Java


Hyperion



EPM Patch Set Updates - September 2015, from Business Analytics - Proactive Support.

Oracle Utilities

Oracle Utilities Customer Care and Billing 2.5.0.1.0 available, from The Shorten Spot (@theshortenspot).

Demantra


User Defined Field Fact History, from the Oracle Primavera Analytics Blog.

Opinion

A bit of product evangelism combined with some prophecy and analysis on ZFS storage: This Is Our Time, from The Wonders of ZFS Storage.

EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

PeopleSoft Streams from Oracle University

Jim Marion - Thu, 2015-10-08 14:45

In February of this year, Oracle University launched the PeopleSoft Learning Stream. Oracle's Learning Streams are short, educational vignettes. I was given the privilege of recording 6 streams:

  • Using JavaScript with Pagelet Wizard is a 21 minute video showing you how to use Pagelet Wizard to convert a PeopleSoft query into an interactive D3 chart, a navigation collection into a carousel, a navigation collection into an accordion, and RequireJS for JavaScript dependency management.
  • REST Query Access Service is a 15 minute session showing you how to craft a Query Access Service REST URL.
  • Working with JSON in PeopleSoft Document Technology is a 23 minute video demonstrating how to use the PeopleCode Document, Compound, and Collection objects to read and write JSON.
  • Basic Java API with PeopleCode is a 26 minute session showing you how to use the delivered Java API with PeopleCode. This session covers constructors, instance methods, properties, and static method invocation. Java objects demonstrated include String, Hashtable, Regular Expression Pattern and Matcher, arrays, and String.format.
  • Intermediate Java API with PeopleCode is a 38 minute video that shows you how to configure JDeveloper to write Java for the PeopleSoft Application and Process Scheduler servers and provides some examples of writing and deploying Java to a PeopleSoft application server. Note: in this session you get to watch me attempt to troubleshoot an App Engine ABEND.
  • Advanced Java API with PeopleCode is a 26 minute recording showing you how to use Java Reflection to remove PeopleCode ambiguity as well as how to use JavaScript to avoid reflection.

You can access all of my streams here. From this page you can preview the first 2 minutes of each video or subscribe for unlimited access to all of the videos in the Oracle PeopleSoft Learning Stream.

Presidents of USA and their Birth Signs – Sankey Visualization

Nilesh Jethwa - Thu, 2015-10-08 14:00

In this analysis, we will visualize the relation between the Age at Presidency, State of Birth and birth sign.

Read more at: www.infocaptor.com/dashboard/presidents-of-usa-and-their-birth-signs-sankey-visualization

Amazon Quick Sight – BI on Cloud?

Dylan's BI Notes - Thu, 2015-10-08 09:08
In my post Data Warehouses on Cloud – Amazon Redshift, I mentioned that what would be really useful is providing BI on Cloud, not just Data Warehouse on Cloud. I felt that BICS makes more sense compared to Amazon Redshift. I discussed this with a couple of people last night at a meetup.  Some of them […]
Categories: BI & Warehousing

How to delete older emails from GMAIL

Arun Bavera - Wed, 2015-10-07 09:40


Other category:

category: social older_than:45d
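A few more searches in the same style (these use standard Gmail search operators; the categories, labels, and ages are just examples to adapt):

```
category:promotions older_than:30d
in:inbox has:attachment larger:10M older_than:1y
label:newsletters before:2015/01/01
```

After searching, ticking “Select all conversations that match this search” before deleting applies the action beyond the first page of results.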

Categories: Development

Why go to Oracle OpenWorld?

Duncan Davies - Wed, 2015-10-07 08:00

We’re a shade under a month away from the biggest event in the calendar for those that work in the Oracle marketplace – the Oracle OpenWorld Conference.

It runs every year in San Francisco and draws a massive 60,000 attendees from 145 countries (plus 2.1 million online attendees). That’s huge.

There are more than 2,500 sessions from ~3,600 speakers, approximately half of which are customers/partners and half are Oracle themselves. As well as the sessions there are the demo grounds and the exhibition hall, all great places for networking with people that you’ve either not met before or have only ever come across online. You get quality face-time with top developers and execs, who are normally hidden behind many levels of Oracle Support. These are the people who have designed and written the products and services that we’ll be using over the coming years, so meeting up with them is priceless.

If you register before the event, it’s $2,450 (about £1,600).

I’m lucky to have the chance to go again this year, and I know already that it’s going to have huge value for both me and Cedar. Both my colleague, Graham, and I were lucky enough to be selected to speak (his session is on Fluid, mine is on Selective Adoption – the two hottest topics in PeopleSoft right now).

Graham also produced this lively promo video:

This (above) is what we look like, it’d be great to say hello to you if you’re around. Likewise, if you’re coming to either of our sessions let us know and we’ll be sure to say hi.

As a nice bonus, we get to see Elton John and Beck at the Appreciation Event!

I’m really looking forward to seeing and hearing about the very latest from the PeopleSoft and Fusion/Taleo worlds. Look out for a Cedar event when we return where we can share everything with you.


About My Son, Chris Silva, Amazing Artist, Father and All-Around Human Being

FeuerThoughts - Tue, 2015-10-06 14:59
"For the record...."

Chris is the 2015 recipient of a 3arts grant, which makes me incredibly proud and also gives me the opportunity to share his professional art bio (I mostly experience him these days as Papa to my two wonderful granddaughters).

Born in Puerto Rico, Chris Silva has been a prominent figure in Chicago’s graffiti and skateboarding scenes since the 1980s, as well as an enthusiastic fan of a wide range of music genres which have resulted from the influence of metropolitan life. Building on his solid graffiti art foundation, Silva proceeded to play a significant role in the development of what is now commonly referred to as "street art." He now splits his time between working on large-scale commissions, producing gallery oriented work, and leading youth-involved public art projects. As a self-taught sound artist with roots in DJ culture, Silva also anchors a collaborative recording project known as This Mother Falcon, and has recently started integrating his audio compositions into his installation work.

In the early 90s, Silva worked on a mural with the Chicago Public Art Group and was eventually brought on board to help lead community art projects with other urban youth. As a result, the act of facilitating art experiences for young people has become an important part of his art practice, and he regularly includes students as collaborators on large-scale artwork that often leans heavily on improvisation. Over the years, Silva has helped orchestrate youth art projects both independently and in partnership with Chicago Public Art Group, Young Chicago Authors, Gallery 37, Yollocalli Arts Reach, After School Matters, and the School of The Art Institute of Chicago.

Silva was awarded a major public art commission by the Chicago Transit Authority to create a mosaic for the Pink Line California Station (2004); created block-long murals in Chicago's Loop “You Are Beautiful” (2006); created a sculpture for the Seattle Sound Transit System (2008); won the Juried Award for Best 3D Piece at Artprize (2012); and created large commissions for 1871 Chicago (2013), the City of Chicago, LinkedIn, CBRE (2014), OFS Brands, and The Prudential Building (2015). He has exhibited in Chicago, San Francisco, Los Angeles, New York City, Philadelphia, London, Melbourne, Copenhagen, and The International Space Station. In 2007 Silva received an Artist Fellowship Award from The Illinois Arts Council.
Categories: Development

Top 8 Strategies to Thrive at Oracle OpenWorld

VitalSoftTech - Tue, 2015-10-06 14:08
Yes! It’s that time of the year again when we start planning for the premier Oracle OpenWorld. So let’s get right into it! Read more here – Top 8 Strategies to Thrive at Oracle OpenWorld. Related article – Advanced Sessions at Oracle OpenWorld 2015.
Categories: DBA Blogs

Fundamentals of SQL Writeback in Dodeca

Tim Tow - Mon, 2015-10-05 22:00
One of the features of Dodeca is read-write functionality to SQL databases.  We often get questions as to how to write data back to a relational database, so I thought I would post a quick blog entry for our customers to reference.

This example will use a simple table structure in SQL Server though the concepts are the same when using Oracle, DB2, and most other relational databases.  The example will use a simple Dodeca connection to a JDBC database.  Here is the Dodeca SQL Connection object used for the connection.

The table I will use for this example was created with the following CREATE TABLE  statement.

CREATE TABLE [dbo].[Test](
  [TestID] [int] IDENTITY(1,1) NOT NULL,
  [TestCode] [nvarchar](50) NULL,
  [TestName] [nvarchar](50) NULL,
  CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED ([TestID] ASC)
)

First, I used the Dodeca SQL Excel View Wizard to create a simple view in Dodeca to retrieve the data into a spreadsheet.  The view, before setting up writeback capabilities, looks like this.

To make this view writeable, follow these steps.
  1. Add the appropriate SQL insert, update, or delete statements to the Dodeca SQL Passthrough Dataset object.  The values to be replaced in the SQL statement must be specified using the notation @ColumnName where ColumnName is the column name, or column alias, of the column containing the data.
  2. Add the column names of the primary key for the table to the PrimaryKey property of the SQL Passthrough DataSet object.
  3. Depending on the database used, define the column names and their respective JDBC datatypes in the Columns property of the SQL Passthrough Dataset.  This mapping is optional for SQL Server because Dodeca can obtain the required information from the Microsoft JDBC driver, however, the Oracle and DB2 JDBC drivers do not provide this information and it must be entered by the developer.
For insert, update, and delete operations, Dodeca parses the SQL statement to read the parameters that use the @ indicator and creates a JDBC prepared statement to execute the statements.  The prepared statement format is very efficient as it compiles the SQL statement once and then executes it multiple times.  Each inserted row is also passed to the server during the transaction.  The values from each row are then used in conjunction with the prepared statement to perform the operation.
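The @-parameter parsing described above can be sketched in a few lines (a simplified illustration of the technique, not Dodeca's actual parser; it ignores complications such as @ inside string literals):

```python
import re

def to_prepared(sql):
    """Rewrite '@ColumnName' markers as '?' placeholders and collect the
    column names in positional order, as a JDBC prepared statement expects."""
    names = []
    def repl(match):
        names.append(match.group(1))
        return "?"
    return re.sub(r"@(\w+)", repl, sql), names

sql = "INSERT INTO Test (TestCode, TestName) VALUES (@TestCode, @TestName)"
prepared, params = to_prepared(sql)
# prepared: "INSERT INTO Test (TestCode, TestName) VALUES (?, ?)"
# params:   ["TestCode", "TestName"]
```

Each inserted row then supplies its values in the order of `params`, while the rewritten statement is compiled only once.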

Here is the completed Query definition.


Next, modify the DataSetRanges property of the Dodeca View object and, to enable insert operations, set the AllowAddRow property to True.  Note that if you added update and/or delete SQL to your SQL Passthrough Dataset object, be sure to enable those operations on the worksheet via the AllowDeleteRow and AllowModifyRow properties.

Once this step is complete, you can run the Dodeca View, add a row, and press the Save button to save the record to the relational database.



Insert, update, and delete operations using plain SQL statements are limited to a single table.  If you need to update multiple tables, you must use stored procedures to accomplish the functionality.  You can call a stored procedure in Dodeca using syntax similar to the following example:

{call sp_InsertTest(@TestCode, @TestName)}
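For illustration, a matching procedure on the SQL Server side might look like the following (sp_InsertTest is a hypothetical procedure written for the Test table above; your procedure can of course touch several tables):

```sql
CREATE PROCEDURE sp_InsertTest
  @TestCode nvarchar(50),
  @TestName nvarchar(50)
AS
BEGIN
  INSERT INTO [dbo].[Test] (TestCode, TestName)
  VALUES (@TestCode, @TestName);
END
```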

Dodeca customers can contact support for further information at support@appliedolap.com.
Categories: BI & Warehousing

IBM Bluemix - Specify only Liberty buildpack features you require

Pas Apicella - Mon, 2015-10-05 21:22
I am more often than not using Spring Boot applications on IBM Bluemix, and most of what I need is packaged with the application: JPA or JDBC drivers, REST, etc. With IBM Bluemix we can specify which buildpack we wish to use, but by default Liberty is used for Java applications.

When a stand-alone application is deployed, a default Liberty configuration is provided for the application. The default configuration enables the following Liberty features:
  • beanValidation-1.1
  • cdi-1.2
  • ejbLite-3.2
  • el-3.0
  • jaxrs-2.0
  • jdbc-4.1
  • jndi-1.0
  • jpa-2.1
  • jsf-2.2
  • jsonp-1.0
  • jsp-2.3
  • managedBeans-1.0
  • servlet-3.1
  • websocket-1.1
  • icap:managementConnector-1.0
  • appstate-1.0
Here is how I strip the Liberty runtime container down to the bare minimum of features I need.

manifest.yml

applications:
 - name: pas-speedtest
   memory: 512M
   instances: 1
   path: ./demo-0.0.1-SNAPSHOT.jar
   host: pas-speedtest
   domain: mybluemix.net
   env:
     JBP_CONFIG_LIBERTY: "app_archive: {features: [jsp-2.3, websocket-1.1, servlet-3.1]}"


 More Information

https://www.ng.bluemix.net/docs/starters/liberty/index.html#optionsforpushinglibertyapplications


Categories: Fusion Middleware

Uploading 26M StackOverflow Questions into Oracle 12c

Marcelo Ochoa - Mon, 2015-10-05 17:42
Just for fun or testing in-memory capabilities of Oracle 12c

Following the post Import 10M Stack Overflow Questions into Neo4j In Just 3 Minutes, I modified the Python script to include the foreign-key columns that are not part of the graph database design but are required in a relational model.
The Python files to_csv.py and utils.py can be downloaded from my drive; basically the change adds these two lines:
                el.get('parentid'),
                el.get('owneruserid'),
when generating the output file csvs/posts.csv, the idea is to convert the StackOverflow export files:
-rw-r--r-- 1 root root   37286997 ago 18 12:50 stackoverflow.com-PostLinks.7z
-rw-r--r-- 1 root root 7816218683 ago 18 13:52 stackoverflow.com-Posts.7z
-rw-r--r-- 1 root root     586861 ago 18 13:52 stackoverflow.com-Tags.7z
-rw-r--r-- 1 root root  160468734 ago 18 13:54 stackoverflow.com-Users.7z
-rw-r--r-- 1 root root  524354790 ago 18 13:58 stackoverflow.com-Votes.7z
-rw-r--r-- 1 root root 2379415989 sep  2 14:28 stackoverflow.com-Comments.7z
-rw-r--r-- 1 root root  112105812 sep  2 14:29 stackoverflow.com-Badges.7z
to a list of CSV files for quick importing into Oracle 12c RDBMS using external tables, here the list of converted files and theirs sizes:
3,8G         posts.csv
287M posts_rel.csv
524K tags.csv
517M tags_posts_rel.csv
355M users.csv
427M users_posts_rel.csv
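The effect of those two extra lines can be sketched like this (a minimal, self-contained illustration using the Stack Overflow dump's attribute names; the real to_csv.py streams Posts.xml incrementally and handles many more columns):

```python
import csv, io
import xml.etree.ElementTree as ET

def posts_to_csv(xml_text, out):
    """Emit post_id plus the two foreign-key columns (ParentId -> parent
    question, OwnerUserId -> users table) needed for the relational model."""
    writer = csv.writer(out)
    for el in ET.fromstring(xml_text):
        writer.writerow([el.get("Id"), el.get("ParentId"), el.get("OwnerUserId")])

sample = ('<posts>'
          '<row Id="1" OwnerUserId="8"/>'
          '<row Id="2" ParentId="1" OwnerUserId="9"/>'
          '</posts>')
buf = io.StringIO()
posts_to_csv(sample, buf)
# buf now holds two CSV rows: "1,,8" and "2,1,9"
```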
With the above files and an Oracle 12c instance running in a Docker container, as described in my previous post On docker, Ubuntu and Oracle RDBMS, I executed these steps:
- logged as SYS:
alter system set sga_max_size=4G scope=spfile;
alter system set sga_target=4G scope=spfile;
alter system set inmemory_size=2G scope=spfile;
create user sh identified by sh
   default tablespace ts_data
   temporary tablespace temp
   quota unlimited on ts_data;
grant connect,resource,luceneuser to sh;
create directory data_dir1 as '/mnt';
create directory tmp_dir as '/tmp';
grant all on directory data_dir1 to sh;
grant all on directory tmp_dir to sh;
These commands create a new user and the directories used by the external tables. Note that the CSV files are available inside the Docker machine as the /mnt directory; I run my Docker image with:
docker run --privileged=true --ipc=host --volume=/var/lib/docker/dockerfiles/stackoverflow.com/csvs:/mnt --volume=/mnt/backup/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102
Then logged as SH user:
- Importing users
create table users_external
( user_id            NUMBER(10),
  display_name VARCHAR2(4000),
  reputation       NUMBER(10),
  aboutme         VARCHAR2(4000),
  website_url    VARCHAR2(4000),
  location          VARCHAR2(4000),
  profileimage_url VARCHAR2(4000),
  views             NUMBER(10),
  upvotes          NUMBER(10),
  downvotes     NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'users.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_users
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from users_external);
-- Elapsed: 00:00:22.76
ALTER TABLE so_users ADD PRIMARY KEY (user_id);
-- Elapsed: 00:00:13.08
create index so_users_display_name_idx on so_users(display_name);
-- Elapsed: 00:00:08.01
- Importing Posts
create table posts_external
( post_id      NUMBER(10),
  parent_id   NUMBER(10),
  user_id      NUMBER(10),
  title            VARCHAR2(4000),
  body          CLOB,
  score         NUMBER(10),
  views        NUMBER(10),
  comments NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'posts.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_posts
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from posts_external);
-- Elapsed: 00:14:20.89
ALTER TABLE so_posts ADD PRIMARY KEY (post_id);
-- Elapsed: 00:02:35.86
-- purge posts associated to no imported users
delete from so_posts where user_id not in (select user_id from so_users);
-- Elapsed: 00:02:41.64
create index so_posts_user_id_idx on so_posts(user_id);
-- Elapsed: 00:01:34.87
ALTER TABLE so_posts ADD CONSTRAINT fk_so_user FOREIGN KEY (user_id) REFERENCES so_users(user_id);
-- Elapsed: 00:00:09.28
Note that 26 million posts were imported in 14 minutes, not bad considering that the CSV source was on an external USB 2.0 drive and the Oracle 12c tablespaces were placed on a USB 3.0 drive; here is a screenshot showing the IO bandwidth consumed on both drives.

only 4.8 Mb/s reading from sdb (CSV) and 9.7 Mb/s writing to sdc1 (ts_data).
- Importing tags
create table tags_external
( tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'tags.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_tags
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from tags_external);
-- Elapsed: 00:00:00.55
create table tags_posts_external
( post_id      NUMBER(10),
  tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'tags_posts_rel.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_tags_posts
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from tags_posts_external);
-- Elapsed: 00:00:43.75
-- purge tags associated to no imported posts
delete from so_tags_posts where post_id not in (select post_id from so_posts);
-- Elapsed: 00:02:42.00
create index so_tags_posts_post_id_idx on so_tags_posts(post_id);
-- Elapsed: 00:00:43.29
ALTER TABLE so_tags_posts ADD CONSTRAINT fk_so_posts FOREIGN KEY (post_id) REFERENCES so_posts(post_id);
-- Elapsed: 00:01:16.65
Note that, as with the posts<->users one-to-many relation, tags<->posts is also one-to-many, and some posts referenced by a few tags were not imported due to character-encoding errors.
As a summary of the above steps: 26 million posts from 4.5 million registered users were imported; 41K distinct tags are used, with an average of 1.11 tags per post (29M tag/post rows).
Next blog post will be about using Oracle 12c in-memory features to query this corpus data.

OTN at Oracle OpenWorld Group - Join today!

OTN TechBlog - Mon, 2015-10-05 12:27

Join the OTN at Oracle OpenWorld group on the OTN Community Platform!  This group is designed to keep you in the know about all the GREAT activities and events that the Team OTN is planning/organizing for Oracle OpenWorld in San Francisco this October (24th to 28th).

Some of the events/activities to look forward to -

Community Events - RAC Attack and Blogger Meetup.

Networking Opportunities - Sunday Kick off Party, Cloud Hour

NEW activities! Graffiti Wall and giant games plus Make Your Own T-Shirt is back with NEW art!


We hope to see you there!

TEAM OTN


Do we really need semantic layer from OBIEE?

Dylan's BI Notes - Mon, 2015-10-05 10:22
Not all BI tools have a semantic layer.  For example, Oracle Discoverer does not seem to have a strong one. This page summarizes what the OBIEE semantic layer can do for you… BI Platform Semantic Layer I think that if these features can be accomplished in other ways and can be proven that they are not necessary, […]
Categories: BI & Warehousing

Cedar’s Selective Adoption Event recap

Duncan Davies - Mon, 2015-10-05 05:00

A week or so back Cedar held a free Selective Adoption event for clients and friends. The idea behind the event was to help those on 9.2 already to make the most of what Selective Adoption can offer, and to show those that are yet to make the step to 9.2 what the future could look like.

The event went really well. Jeff Robbins opened the proceedings, giving an overview of the technology and what the roadmap looks like. Then Graham Smith and I did a couple of slots each on how the process works, what you need to get the technology up and running, the huge value it can bring, and the areas that you should do yourself versus the ones where it’s cheaper to get help.

Graham diving deep into the Tech

Covering the Options

After the event we all decamped to a nearby pub for less formal chat. It was really great to see that some clients still wanted more however. Happily, Graham was able to do a live demo from the middle of the pub, showing that we can ‘walk the walk’ as well as talking about it …

Live demo in the pub


Data Lake vs. Data Warehouse

Dylan's BI Notes - Sun, 2015-10-04 18:18
These are different concepts. Data Lake – Collect data from various sources in a central place.  The data are stored in the original form.  Big data technologies are used and thus the typical data storage is Hadoop HDFS. Data Warehouse – “Traditional” way of collecting data from various sources for reporting.  The data are consolidated […]
Categories: BI & Warehousing

Use DMZ to access BI from outside firewall

Dylan's BI Notes - Fri, 2015-10-02 14:12
DMZ is a technology that allows you to configure your network to be accessible outside firewall. Some of users may want to access some of corporate reports from mobile or from their personal computers. While VPN and Citrix may be useful for these cases, DMZ can provide another option. A good article – OBIEE Security […]
Categories: BI & Warehousing

What I Wanted to Tell Terry Bradshaw

Cary Millsap - Thu, 2015-10-01 17:23
I met Terry Bradshaw one time. It was about ten years ago, in front of a movie theater near where I live.

When I was little, Terry Bradshaw was my enemy because, unforgivably to a young boy, he and his Pittsburgh Steelers kept beating my beloved Dallas Cowboys in Super Bowls. As I grew up, though, his personality on TV talk shows won me over, and I enjoy watching him to this day on Fox NFL Sunday. After learning a little bit about his life, I’ve grown to really admire and respect him.

I had heard that he owned a ranch not too far from where I live, and so I had it in mind that inevitably I would meet him someday, and I would say thank you. One day I had that chance.

I completely blew it.

My wife and I saw him there at the theater one day, standing by himself not far from us. It seemed like if I were to walk over and say hi, maybe it wouldn’t bother him. So I walked over, a little bit nervous. I shook his hand, and I said, “Mr. Bradshaw, hi, my name is Cary.” I would then say this:

I was a big Roger Staubach fan growing up. I watched Cowboys vs. Steelers like I was watching Good vs. Evil.

But as I’ve grown up, I have gained the deepest admiration and respect for you. You were a tremendous competitor, and you’re one of my favorite people to see on TV. Every time I see you, you bring a smile to my face. You’ve brought joy to a lot of people.

I just wanted to say thank you.
Yep, that’s what I would say to Terry Bradshaw if I got the chance. But that’s not how it would turn out. How it actually went was like this, …my big chance:

Me: I was a big Roger Staubach fan growing up.
TB: Hey, so was I!
Me: (stunned)
TB: (turns away)
The End
I was heartbroken. It bothers me still today. If you know Terry Bradshaw or someone who does, I wish you would please let him know. It would mean a lot to me.

…I did learn something that day about the elevator pitch.

Oracle Priority Support Infogram for 01-OCT-2015

Oracle Infogram - Thu, 2015-10-01 14:42

RDBMS


PL/SQL

A Surprising Program, from Oracle Database PL/SQL and EBR.

Data Warehouse

DOP Downgrades, or Avoid The Ceiling, from The Data Warehouse Insider blog.

WebLogic


Java


Creating Games with JavaFX 8: Case Study, from The Java Tutorials Blog.

OAG

The 10 most recently created notes for OAG as of 24 Sept. 2015, from Proactive Support - Java Development using Oracle Tools.

Ops Center

Changing an Asset's Name, from the Oracle Ops Center blog.

Data Integration


SOA

Top tweets SOA Partner Community – September 2015, from the SOA & BPM Partner Community Blog.

Real User Monitoring

How to Configure User ID Identification, from Real User Monitoring.

Solaris

Solaris: Identifying EFI disks, from Giri Mandalika's Repository.

EBS

From the Oracle E-Business Suite Support blog:

Finally Eliminate Those Duplicate WIP Transactions!



Subscribe to Oracle FAQ aggregator