Feed aggregator

MySQL 5.6.20-4 and Oracle Linux DTrace

Wim Coekaerts - Thu, 2014-07-31 10:57
The MySQL team just released MySQL 5.6.20. One of the cool new things for Oracle Linux users is the addition of MySQL DTrace probes. If you run Oracle Linux 6 or 7 with UEK r3 (3.8.x) and the latest DTrace utils/tools, you can make use of this. MySQL 5.6 is available for installation through ULN or from public-yum, so you can install it with yum.

# yum install mysql-community-server

Then install the DTrace utilities from ULN.

# yum install dtrace-utils

As root, enable DTrace and allow normal users to record trace information:

# modprobe fasttrap
# chmod 666 /dev/dtrace/helper

Start MySQL server.

# /etc/init.d/mysqld start

Now you can try out various dtrace scripts. You can find the reference manual for MySQL DTrace support here.

Example 1

Save the script below as query.d.

#!/usr/sbin/dtrace -qws
#pragma D option strsize=1024


mysql*:::query-start /* using the mysql provider */
{

  self->query = copyinstr(arg0); /* Get the query */
  self->connid = arg1; /*  Get the connection ID */
  self->db = copyinstr(arg2); /* Get the DB name */
  self->who   = strjoin(copyinstr(arg3),strjoin("@",
     copyinstr(arg4))); /* Get the username */

  printf("%Y\t %20s\t  Connection ID: %d \t Database: %s \t Query: %s\n", 
     walltimestamp, self->who ,self->connid, self->db, self->query);

}

Run it; then, in another terminal, connect to the MySQL server and run a few queries.

# dtrace -s query.d 
dtrace: script 'query.d' matched 22 probes
CPU     ID                    FUNCTION:NAME
  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:21 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: select @@version_comment limit 1

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: SELECT DATABASE()

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show databases

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show tables

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:31 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: select * from foo

Example 2

Save the script below as statement.d.

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-60s %-8s %-8s %-8s\n", "Query", "RowsU", "RowsM", "Dur (ms)");
}

mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
{
    self->query = copyinstr(arg0);
    self->querystart = timestamp;
}

mysql*:::insert-done, mysql*:::select-done,
mysql*:::delete-done, mysql*:::multi-delete-done, mysql*:::insert-select-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           0,
           arg1,
           this->elapsed);
    self->querystart = 0;
}

mysql*:::update-done, mysql*:::multi-update-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           arg1,
           arg2,
           this->elapsed);
    self->querystart = 0;
}

Run it and do a few queries.

# dtrace -s statement.d 
Query                                                        RowsU    RowsM    Dur (ms)
select @@version_comment limit 1                             0        1        0
SELECT DATABASE()                                            0        1        0
show databases                                               0        6        0
show tables                                                  0        2        0
select * from foo                                            0        1        0

Test your Application with the WebLogic Maven plugin

Edwin Biemond - Thu, 2014-07-31 06:47
In this blog post I will show you how easy it is to add some unit tests to your application when you use Maven together with the 12.1.3 Oracle software (like WebLogic, JDeveloper or Eclipse OEPE). To demonstrate this, I will create a RESTful Person Service in JDeveloper 12.1.3 which will use the Maven project layout. We will do the following: Create a Project and Application based on a Maven

Developing with JAX-RS 2.0 for WebLogic Server 12.1.3

Steve Button - Thu, 2014-07-31 01:47
In an earlier post on the topic of Using JAX-RS 2.0 with WebLogic Server 12.1.3, I described that we've utilized the shared-library model to distribute and enable it.

This approach exposes the JAX-RS 2.0 API and enlists the Jersey 2.x implementation on the target server, allowing applications to make use of it when they are deployed with a library reference in a weblogic deployment descriptor.

The one resulting consideration from a development perspective is that since this API is not part of javaee-api-6.jar, nor a default API of the server, it's not available in the usual development API libraries that WebLogic provides.

For instance the $ORACLE_HOME/wlserver/server/lib/api.jar doesn't contain a reference to the JAX-RS 2.0 API, nor do the set of maven artifacts we produce and push to a repository via the oracle-maven-sync plugin contain the javax.ws.rs-api-2.0.jar library.

To develop an application using JAX-RS 2.0 to deploy to WebLogic Server 12.1.3, the javax.ws.rs-api-2.0.jar needs to be sourced and added to the development classpath.

Using Maven, this is very simple to do by adding a dependency on the javax.ws.rs:javax.ws.rs-api:2.0 artifact, which is hosted in the public Maven repositories:

    <dependency>
        <groupId>javax.ws.rs</groupId>
        <artifactId>javax.ws.rs-api</artifactId>
        <version>2.0</version>
        <scope>provided</scope>
    </dependency>

Note here that the scope is set to provided since the library will be realized at runtime through the jax-rs-2.0.war shared library that is deployed to the target server and referenced by the application. It doesn't need to be packaged with the application to deploy to WebLogic Server 12.1.3.
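For reference, the library reference in the application's WEB-INF/weblogic.xml might look like the sketch below. The library-name value shown is an assumption based on the name the shared library registers under in a default 12.1.3 installation; check the deployed library's actual name in your domain before using it.

```xml
<!-- Hypothetical WEB-INF/weblogic.xml library reference; verify the
     library-name against the shared library deployed in your domain. -->
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <library-ref>
        <library-name>jax-rs</library-name>
        <specification-version>2.0</specification-version>
        <exact-match>false</exact-match>
    </library-ref>
</weblogic-web-app>
```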

For other build systems using automated dependency management such as Gradle or Ant/Ivy, the same sort of approach can be used.

For Ant based build systems, the usual approach of obtaining the necessary API libraries and adding them to the development CLASSPATH will work. Be mindful that there is no need to bundle the javax.ws.rs-api-2.0.jar in the application itself, as it will be available from the server when correctly deployed and referenced in the weblogic deployment descriptor.

"Private" App Class Members

Jim Marion - Thu, 2014-07-31 01:23

I was reading Lee Greffin's post More Fun with Application Packages -- Instances and stumbled across this quote from PeopleBooks:

A private instance variable is private to the class, not just to the object instance. For example, consider a linked-list class where one instance needs to update the pointer in another instance.

What exactly does that mean? I did some testing to try and figure it out. Here is what I came up with:

  1. It is still an instance variable which means each in-memory object created from the App Class blue print has its own memory placeholder for each instance member.
  2. Instances of other classes can't interact with private instance members.
  3. Instances of the exact same class CAN interact with private members of a different instance.
  4. Private instance members differ from static members in other languages because they don't all share the same pointer (pointer, reference, whatever).
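This class-private (rather than instance-private) behavior is not unique to PeopleCode; Java's private works the same way. A minimal sketch in Java (a hypothetical ListItem class, not from the post, mirroring the PeopleCode example below):

```java
// Demonstrates that "private" is private to the class, not the instance:
// linkTo writes directly to ANOTHER instance's private field.
public class ListItem {
    private ListItem nextItem; // private, yet reachable from any ListItem
    private final String data;

    public ListItem(String data) {
        this.data = data;
    }

    // Sets the other instance's private pointer to this instance,
    // just like the PeopleCode linkTo method.
    public void linkTo(ListItem item) {
        item.nextItem = this; // legal: same class, different instance
    }

    public ListItem next() { return nextItem; }
    public String getData() { return data; }

    public static void main(String[] args) {
        ListItem item1 = new ListItem("Item 1");
        ListItem item2 = new ListItem("Item 2");
        item2.linkTo(item1);
        System.out.println(item1.next().getData()); // prints "Item 2"
    }
}
```

The compiler accepts `item.nextItem = this` because access control in both languages is checked per class, not per object.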

I thought it was worth proving so here is my sample. It is based on the example suggested in PeopleBooks:

For example, consider a linked-list class where one instance needs to update the pointer in another instance.

The linked list is just an item with a pointer to the next item (forward only). A program using it keeps a pointer to the "head" and then calls next() to iterate over the list. It is a very common pattern so I will forgo further explanation. Here is a quick implementation (in the App Package JJM_COLLECTIONS):

class ListItem
method ListItem(&data As any);
method linkTo(&item As JJM_COLLECTIONS:ListItem);
method next() Returns JJM_COLLECTIONS:ListItem;
method getData() Returns any;
private
instance JJM_COLLECTIONS:ListItem &nextItem_;
instance any &data_;
end-class;

method ListItem
/+ &data as Any +/
%This.data_ = &data;
end-method;

method linkTo
/+ &item as JJM_COLLECTIONS:ListItem +/
&item.nextItem_ = %This;
end-method;

method next
/+ Returns JJM_COLLECTIONS:ListItem +/
Return %This.nextItem_;
end-method;

method getData
/+ Returns Any +/
Return %This.data_;
end-method;

Notice the linkTo method sets the value of the private instance member of a remote instance (its parameter), NOT the local instance. This is what is meant by private to the class, not private to the instance. Each instance has its own &nextItem_ instance member and other instances of the exact same class can manipulate it. Here is the test case I used to test the remote manipulation implementation:

import TTS_UNITTEST:TestBase;
import JJM_COLLECTIONS:ListItem;

class TestListItem extends TTS_UNITTEST:TestBase
method TestListItem();
method Run();
end-class;

method TestListItem
%Super = create TTS_UNITTEST:TestBase("TestListItem");
end-method;

method Run
/+ Extends/implements TTS_UNITTEST:TestBase.Run +/
Local JJM_COLLECTIONS:ListItem &item1 =
create JJM_COLLECTIONS:ListItem("Item 1");
Local JJM_COLLECTIONS:ListItem &item2 =
create JJM_COLLECTIONS:ListItem("Item 2");

&item2.linkTo(&item1);

%This.AssertStringsEqual(&item1.next().getData(), "Item 2",
"The next item is not Item 2");
%This.Msg(&item1.next().getData());
end-method;

The way it is written requires you to create the second item and then call the second item's linkTo method to associate it with the head (or previous) element.

Now, just because you CAN manipulate a private instance member from a remote instance doesn't mean you SHOULD. Doing so seems to violate encapsulation. You could accomplish the same thing by reversing the linkTo method. What if we flipped this around so you created the second item, but called the first item's linkTo? It is really the first item we want to manipulate in a forward-only list (now, if it were a multi-direction list, perhaps we would want to manipulate the &prevItem_ member?). Here is what the linkTo method would look like:

method linkTo
/+ &item as JJM_COLLECTIONS:ListItem +/
%This.nextItem_ = &item;
end-method;

Now what if we wanted a forward AND reverse linked list? Here is where maybe the ability to manipulate siblings starts to seem a little more reasonable (I still think there is a better way, but humor me):

class ListItem
method ListItem(&data As any);
method linkTo(&item As JJM_COLLECTIONS:ListItem);
method next() Returns JJM_COLLECTIONS:ListItem;
method prev() Returns JJM_COLLECTIONS:ListItem;
method remove() Returns JJM_COLLECTIONS:ListItem;
method getData() Returns any;
private
instance JJM_COLLECTIONS:ListItem &nextItem_;
instance JJM_COLLECTIONS:ListItem &prevItem_;
instance any &data_;
end-class;

method ListItem
/+ &data as Any +/
%This.data_ = &data;
end-method;

method linkTo
/+ &item as JJM_COLLECTIONS:ListItem +/
REM ** manipulate previous sibling;
&item.nextItem_ = %This;
%This.prevItem_ = &item;
end-method;

method next
/+ Returns JJM_COLLECTIONS:ListItem +/
Return %This.nextItem_;
end-method;

method prev
/+ Returns JJM_COLLECTIONS:ListItem +/
Return %This.prevItem_;
end-method;

method remove
/+ Returns JJM_COLLECTIONS:ListItem +/
%This.nextItem_.linkTo(%This.prevItem_);
REM ** Or manipulate both siblings;
REM %This.prevItem_.nextItem_ = %This.nextItem_;
REM %This.nextItem_.prevItem_ = %This.prevItem_;
Return %This.prevItem_;
end-method;

method getData
/+ Returns Any +/
Return %This.data_;
end-method;

And here is the final test case:

import TTS_UNITTEST:TestBase;
import JJM_COLLECTIONS:ListItem;

class TestListItem extends TTS_UNITTEST:TestBase
method TestListItem();
method Run();
end-class;

method TestListItem
%Super = create TTS_UNITTEST:TestBase("TestListItem");
end-method;

method Run
/+ Extends/implements TTS_UNITTEST:TestBase.Run +/
Local JJM_COLLECTIONS:ListItem &item1 =
create JJM_COLLECTIONS:ListItem("Item 1");
Local JJM_COLLECTIONS:ListItem &item2 =
create JJM_COLLECTIONS:ListItem("Item 2");
Local JJM_COLLECTIONS:ListItem &item3 =
create JJM_COLLECTIONS:ListItem("Item 3");

&item2.linkTo(&item1);

%This.AssertStringsEqual(&item1.next().getData(), "Item 2",
"Test 1 failed. The next item is not Item 2");
%This.AssertStringsEqual(&item2.prev().getData(), "Item 1",
"Test 2 failed. The prev item is not Item 1");

&item3.linkTo(&item2);
%This.AssertStringsEqual(&item1.next().next().getData(), "Item 3",
"Test 3 failed. The next.next item is not Item 3");
%This.AssertStringsEqual(&item1.next().next().prev().getData(), "Item 2",
"Test 4 failed. The prev item is not Item 2");

Local JJM_COLLECTIONS:ListItem &temp = &item2.remove();
%This.AssertStringsEqual(&item1.next().getData(), "Item 3",
"Test 5 failed. The next item is not Item 3");
%This.AssertStringsEqual(&item1.next().prev().getData(), "Item 1",
"Test 6 failed. The prev item is not Item 1");

end-method;

I hope that helps clear up some of the confusion around the term "private" as it relates to Application Classes.

Using Eclipse (OEPE) to Develop Applications using WebSocket and JSON Processing API with WebLogic Server 12.1.3

Steve Button - Wed, 2014-07-30 02:11
Following from my last posting, I thought I'd also show how Eclipse (OEPE) makes the new Java EE 7 APIs available from Oracle WebLogic Server 12.1.3.

The first step was downloading and installing the Oracle Enterprise Pack for Eclipse (OEPE) distribution from OTN.

http://www.oracle.com/technetwork/developer-tools/eclipse/downloads/index.html

Firing up Eclipse, the next step is to add a new Server type for Oracle WebLogic Server 12.1.3, pointing at a local installation.






With that done, I then created a new Dynamic Web Project that was directed to work against the new WebLogic Server type I'd created.  Looking at the various properties for the project, you can see that the WebSocket 1.0 and JSON Programming 1.0 libraries are automatically picked up and added to the Java Build Path of the application, by virtue of being referenced as part of the WebLogic System Library.



Into this project, I then copied over the Java source and HTML page from my existing Maven project, which compiled and built successfully.

For new applications using these APIs, Eclipse will detect the use of the javax.websocket API and annotations, the javax.json API calls and so forth and present you with a dialog asking you if you want to import the package to the class to resolve the project issues.



With the application now ready, selecting the Run As > Run on Server menu option launches WebLogic Server, deploys the application and opens an embedded browser instance to access the welcome page of the application.


And there's the test application built in Eclipse using the WebSocket and JSON Processing APIs running against WebLogic Server 12.1.3.


List of PeopleSoft Blogs

Jim Marion - Wed, 2014-07-30 00:49
It has been amazing to watch the exponential growth in the number of PeopleSoft community bloggers. It seems that most of them have links to other PeopleSoft blogs, but where is the master list of PeopleSoft blogs? Here is my attempt to create one. Don't see your blog on the list? Add a comment and I'll review your blog. If your blog is education oriented, then I will put it in the list... and probably delete your comment (that way you don't have to feel awkward about self promotion). There are some PeopleSoft related blogs that I really wanted to include, but they just weren't educational (more marketing than education). I suppose some could say that the Oracle blogs I included were primarily marketing focused. That is true. I included them, however, because those product release announcements are so valuable.
I have not read all of these blogs. I can't, don't, and won't attest to the quality of the content in those blogs. Each reader should evaluate the content of these blog posts before implementing suggestions identified in these blogs and their corresponding comments.

    Developing with the WebSocket and JSON Processing API with WebLogic Server 12.1.3 and Maven

    Steve Button - Tue, 2014-07-29 20:21
    Oracle WebLogic Server 12.1.3 provides full support for Java EE 6 and also adds support for a select set of APIs from Java EE 7.

    The additional APIs are:
    • JSR 356 - Java API for WebSocket 1.0
    • JSR 353 - Java API for JSON Processing
    • JSR 339 - Java API for RESTful Web Services 2.0
    • JSR 338 - Java Persistence API 2.1
    See the "What's New in 12.1.3 Guide" at http://docs.oracle.com/middleware/1213/wls/NOTES/index.html#A1011612131 for more general information.

    At runtime, the WebSocket and JSON Processing APIs are available as defaults and don't require any form of post installation task to be performed to enable their use by deployed applications.

    On the other hand, the JPA and JAX-RS APIs require a step to enable them to be used by deployed applications.

    Developing with the WebSocket and JSON Processing APIs using Maven

    To create applications with these APIs for use with Oracle WebLogic Server 12.1.3, the API needs to be made available to the development environment. Typically when developing Java EE 6 applications, the javax:javaee-web-api artifact is used via the following dependency:
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-web-api</artifactId>
        <version>6.0</version>
        <scope>provided</scope>
    </dependency>

    As the WebSocket and JSON Processing APIs are not part of the Java EE 6 API, they need to be added to the project as dependencies.

    The obvious but incorrect way to do this is to change the javax:javaee-web-api dependency to be version 7 so that they are provided as part of that dependency.  This introduces the Java EE 7 API to the application, including APIs such as  Servlet 3.1, EJB 3.2 and so forth which aren't yet supported by WebLogic Server.  Thus it presents the application developer with APIs to use that may not be available on the target server.

    The correct way to add the WebSocket and JSON Processing APIs to the project is to add individual dependencies for each API using their individual published artifacts.

    <dependency>
        <groupId>javax.websocket</groupId>
        <artifactId>javax.websocket-api</artifactId>
        <version>1.0</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>javax.json</groupId>
        <artifactId>javax.json-api</artifactId>
        <version>1.0</version>
        <scope>provided</scope>
    </dependency>

    Using NetBeans, these dependencies can be quickly and correctly added using the code-assist dialog, which presents developers with options for how to resolve any missing classes that have been used in the code.



    Using the JSON Processing API with WebSocket Applications

    The JSON Processing API is particularly useful for WebSocket application development since it provides a simple and efficient API for parsing JSON messages into Java objects and for generating JSON from Java objects. These tasks are very typically performed in WebSocket applications using the Encoder and Decoder interfaces, which provide a mechanism for transforming custom Java objects into WebSocket messages for sending and converting WebSocket messages into Java objects.

    An Encoder converts a Java object into a form that can be sent as a WebSocket message, typically using JSON as the format for use by Web browser based JavaScript clients.
    package buttso.demo.cursor.websocket;

    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.websocket.EncodeException;
    import javax.websocket.Encoder;
    import javax.websocket.EndpointConfig;

    /**
     * Convert a MouseMessage into a JSON payload.
     *
     * @author sbutton
     */
    public class MouseMessageEncoder implements Encoder.Text<MouseMessage> {

        private static final Logger logger = Logger.getLogger(MouseMessageEncoder.class.getName());

        @Override
        public String encode(MouseMessage mouseMessage) throws EncodeException {
            logger.log(Level.FINE, mouseMessage.toString());
            JsonObject jsonMouseMessage = Json.createObjectBuilder()
                    .add("X", mouseMessage.getX())
                    .add("Y", mouseMessage.getY())
                    .add("Id", mouseMessage.getId())
                    .build();
            logger.log(Level.FINE, jsonMouseMessage.toString());
            return jsonMouseMessage.toString();
        }

        @Override
        public void init(EndpointConfig ec) {
            // nothing to initialize
        }

        @Override
        public void destroy() {
            // nothing to clean up
        }
    }

    A Decoder takes a String from a WebSocket message and turns it into a custom Java object, typically receiving a JSON payload that has been constructed and sent by a Web browser based JavaScript client.
    package buttso.demo.cursor.websocket;

    import java.io.StringReader;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.websocket.Decoder;
    import javax.websocket.EndpointConfig;

    /**
     * Converts a JSON payload into a MouseMessage.
     *
     * @author sbutton
     */
    public class MouseMessageDecoder implements Decoder.Text<MouseMessage> {

        private static final Logger logger = Logger.getLogger(MouseMessageDecoder.class.getName());

        @Override
        public MouseMessage decode(String message) {
            logger.log(Level.FINE, message);
            JsonObject jsonMouseMessage = Json.createReader(new StringReader(message)).readObject();
            MouseMessage mouseMessage = new MouseMessage();
            mouseMessage.setX(jsonMouseMessage.getInt("X"));
            mouseMessage.setY(jsonMouseMessage.getInt("Y"));
            logger.log(Level.FINE, mouseMessage.toString());
            return mouseMessage;
        }

        @Override
        public boolean willDecode(String string) {
            return true;
        }

        @Override
        public void init(EndpointConfig ec) {
            // nothing to initialize
        }

        @Override
        public void destroy() {
            // nothing to clean up
        }
    }
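Both classes convert to and from a MouseMessage bean, which isn't shown in the post. A minimal sketch of what it might look like (the property names are assumed from the JSON keys "X", "Y" and "Id" and the getters/setters used above):

```java
// Hypothetical MouseMessage bean matching the accessors used by the
// encoder/decoder above; not taken from the original post.
public class MouseMessage {
    private int x;
    private int y;
    private int id;

    public int getX() { return x; }
    public void setX(int x) { this.x = x; }

    public int getY() { return y; }
    public void setY(int y) { this.y = y; }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    @Override
    public String toString() {
        return "MouseMessage{X=" + x + ", Y=" + y + ", Id=" + id + "}";
    }
}
```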

    The Encoder and Decoder implementations are specified as configuration elements on a WebSocket Endpoint (server and/or client) and are automatically invoked to perform the required conversion task.
    @ServerEndpoint(value = "/mouse", decoders = MouseMessageDecoder.class, encoders = MouseMessageEncoder.class)
    public class MouseWebSocket {

        private final Logger logger = Logger.getLogger(MouseWebSocket.class.getName());

        ...

        @OnMessage
        public void onMessage(Session peer, MouseMessage mouseMessage) throws EncodeException {
            logger.log(Level.FINE, "MouseMessage {0} from {1}", new Object[]{mouseMessage, peer.getId()});
            messages.add(mouseMessage);

            for (Session others : peer.getOpenSessions()) {
                try {
                    if (!others.getId().equals(peer.getId())) {
                        mouseMessage.setId((int) peer.getUserProperties().get("id"));
                    }
                    others.getBasicRemote().sendObject(mouseMessage);
                } catch (IOException ex) {
                    Logger.getLogger(MouseWebSocket.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

        ...
    }
    This example enables MouseMessage objects to be used in the WebSocket ServerEndpoint class to implement the required functionality and allow them to be transmitted in JSON format to and from clients. On the JavaScript client, the JSON representation is used to receive MouseMessages sent from the WebSocket Endpoint and to send MouseMessages to the same WebSocket Endpoint.

    The JavaScript JSON API can be used to produce JSON representation of JavaScript objects as well as parse JSON payloads into JavaScript objects for use by the application code. For example, JavaScript logic can be used to send messages to WebSocket endpoints in JSON form using the JSON.stringify function and to create JavaScript objects from JSON messages received from a WebSocket message using the JSON.parse function.

        ...

    document.onmousemove = function(e) {
        if (tracking) {
            // send current mouse position to websocket in JSON format
            ws.send(JSON.stringify({X: e.pageX, Y: e.pageY}));
        }
    };

    ws.onmessage = function(e) {
        // convert JSON payload into JavaScript object
        mouseMessage = JSON.parse(e.data);

        // create page element using details from received
        // MouseMessage from the WebSocket
        point = document.createElement("div");
        point.style.position = "absolute";
        point.style.zIndex = mouseMessage.Id;
        point.style.left = mouseMessage.X + "px";
        point.style.top = mouseMessage.Y + "px";
        point.style.color = colors[mouseMessage.Id];
        point.innerHTML = "∗";
        document.getElementById("mouser").appendChild(point);
    };
    When running the application, the mouse events are captured from the Web client, sent to the WebSocket endpoint in JSON form, converted into MouseMessages, decorated with an ID representing the client the message came from, and then broadcast out to any other connected WebSocket clients to display.

    A very crude shared-drawing board. 

    Simultaneous drawing in browser windows using the WebSocket and JSON Processing APIs

    Solid Conference San Francisco 2014: Complete Video Compilation

    Surachart Opun - Tue, 2014-07-29 09:17
    Solid Conference focused on the intersection of software and hardware. It's a great community for software and hardware, where audiences can pick up new ideas for combining the two. It gathered ideas from engineers, researchers, roboticists, artists, founders of startups, and innovators.
    O'Reilly has released HD videos for this conference (Solid Conference San Francisco 2014: Complete Video Compilation — experience the revolution at the intersection of hardware and software, and imagine the future). The video files are large, so downloads can take a while; a download manager can help.
    After watching, I was excited to learn some new things from it (run time: 36 hours 8 minutes): machines, devices, components and more.

    Categories: DBA Blogs

    Beta1 of the UnifiedPush Server 1.0.0 released

    Matthias Wessendorf - Tue, 2014-07-29 07:47

    Today we are announcing the first beta release of our 1.0.0 version. After the big overhaul in the last release, including a brand-new AdminUI, this release contains several enhancements:

    • iOS8 interactive notification support
    • increased APNs payload (2k)
    • Pagination for analytics
    • improved callback for details on actual push delivery
    • optimisations and improvements

    The complete list of included items is available on our JIRA instance.

    iOS8 interactive notifications

    Besides the work on the server, we have updated our Java and Node.js sender libraries to support the new iOS8 interactive notification message format.

    If you are curious about iOS8 notifications, Corinne Krych has a detailed blog post on them and how to use them with the AeroGear UnifiedPush Server.

    Swift support for iOS

    On the iOS client side, Corinne Krych and Christos Vasilakis were also busy starting some Swift work: our iOS registration SDK supports Swift on this branch. To give you an idea how it looks, here is some code:

    func application(application: UIApplication!, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: NSData!) {
      // setup registration
      let registration = 
      AGDeviceRegistration(serverURL: NSURL(string: "<# URL of the running AeroGear UnifiedPush Server #>"))
    
        // attempt to register
        registration.registerWithClientInfo({ (clientInfo: AGClientDeviceInformation!) in
            // setup configuration
            clientInfo.deviceToken = deviceToken
            clientInfo.variantID = "<# Variant Id #>"
            clientInfo.variantSecret = "<# Variant Secret #>"
    
            // apply the token, to identify THIS device
            let currentDevice = UIDevice()
    
            // --optional config--
            // set some 'useful' hardware information params
            clientInfo.operatingSystem = currentDevice.systemName
            clientInfo.osVersion = currentDevice.systemVersion
            clientInfo.deviceType = currentDevice.model
            },
    
            success: {
                println("UnifiedPush Server registration succeeded")
            },
            failure: {(error: NSError!) in
                println("failed to register, error: \(error.description)")
            })
    }
    
    Demos

    To get easily started using the UnifiedPush Server we have a bunch of demos, supporting various client platforms:

    • Android
    • Apache Cordova (with jQuery and Angular/Ionic)
    • iOS

    The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

    For those of you who are into Swift, there are Swift branches for these demos as well:

    Feedback

    We hope you enjoy the bits and we do appreciate your feedback! Swing by our mailing list! We are looking forward to hearing from you!


    auto-generate SQLAlchemy models

    Catherine Devlin - Mon, 2014-07-28 16:30

    PyOhio gave my lightning talk on ddlgenerator a warm reception, and Brandon Lorenz got me thinking, and the PyOhio sprints filled me with py-drenaline, and now ddlgenerator can inspect your data and spit out SQLAlchemy model definitions for you:


    $ cat merovingians.yaml
    -
      name: Clovis I
      reign:
        from: 486
        to: 511
    -
      name: Childebert I
      reign:
        from: 511
        to: 558
    $ ddlgenerator --inserts sqlalchemy merovingians.yaml

    from sqlalchemy import create_engine, Column, Integer, MetaData, Table, Unicode
    engine = create_engine(r'sqlite:///:memory:')
    metadata = MetaData(bind=engine)

    merovingians = Table('merovingians', metadata,
        Column('name', Unicode(length=12), nullable=False),
        Column('reign_from', Integer(), nullable=False),
        Column('reign_to', Integer(), nullable=False),
        schema=None)

    metadata.create_all()
    conn = engine.connect()
    inserter = merovingians.insert()
    conn.execute(inserter, **{'name': 'Clovis I', 'reign_from': 486, 'reign_to': 511})
    conn.execute(inserter, **{'name': 'Childebert I', 'reign_from': 511, 'reign_to': 558})
    conn.connection.commit()

    Brandon's working on a pull request to provide similar functionality for Django models!

    Four Options For Oracle DBA Tuning Training

    This page has been permanently moved. Please CLICK HERE to be redirected.

    Thanks, Craig.

    Four Options For Oracle DBA Tuning Training
    Oracle DBAs are constantly solving problems... mysteries. That requires a constant knowledge increase. I received more personal emails from my Oracle DBA Training Options Are Changing posting than ever before. Many of these were from frustrated, angry, and "stuck" DBAs. But in some way, almost all asked the question, "What should I do?"

    In response to the "What should I do?" question, I came up with four types of Oracle DBA performance tuning training that are available today. Here they are:

    Instructor Led Training (ILT) 
Instructor Led Training (ILT) is the best because you have a personal connection with the teacher. I can't speak for other companies, but I strive to connect with every student, and every student knows they can personally email or call me...even years after the training. In fact, I practically beg them to do what we do in class on their production systems and send me the results so I can continue helping them. To me, being a great teacher is more than being a great communicator. It's about connection. ILT makes connecting with students easy.

    Content Aggregators
Content Aggregators are the folks who pull together free content from various sources, then organize and display it. Oh yeah... and they profit from it. Sometimes the content value is high, sometimes not. I tend to think of content aggregators like patent trolls, yet many times they can be a great resource. The problem is you're not dealing with the creator of the content. However, the creator of the content actually knows the subject matter. You can sometimes contact them...as I encourage my students and readers to do.

    Content Creators
Content Creators are the folks who create content based on their experiences. We receive that content through their blogs, videos, conference presentations, and sometimes through their training. I am a content creator, but with an original, almost child-like curiosity and a performance-research twist. Content creators rarely profit directly from their posted content, but somehow try to transform it into a revenue stream. I can personally attest, it can be a risky financial strategy...but it's personally very rewarding. Since I love doing research, it's easy and enjoyable to post my findings so others may benefit.

    Online Training (OLT)
Online Training (OLT) is something I have put off for years. The online Oracle training I have seen is mostly complete and total crap. The content is usually technically shallow and mechanical. The production quality is something a six-year-old could do on their PC. The teaching quality is ridiculous and the experience puts you to sleep. I do not ever want to be associated with that kind of crowd.

I was determined to do something different. It had to be the highest quality. I have invested thousands of dollars in time, labor, and equipment to make online video training work. Based on the encouraging feedback I receive, it's working!

This totally caught me by surprise. I have discovered that I can do things through special effects and a highly organized delivery that are impossible to do in a classroom. (Just watch my seminar introductions on YouTube and you'll quickly see what I mean.) This makes the content rich and highly compressed. One hour of OraPub Online Institute training is easily equivalent to two to four hours of classroom training. Easily. I have also striven to keep the price super low, keep the production at a professional level, and ensure the video can be streamed anywhere in the world on any device. Online training is an option, but you have to search for it.

    Summary
So there you have it. Even with the economics and the devaluation of DBAs as human beings, coupled with new technologies, the Oracle DBA still has at least four main sources of training and knowledge expansion. Don't give up learning!

    Some of you reading may be surprised that I'm writing about this topic because it will hurt my traditional instructor led training (public or on-site) classes. I don't think so. If people can attend my classes in person, they will. Otherwise, I hope they will register for an OraPub Online Institute seminar. Or, at least subscribe to my blog (see upper left of page).

    All the best in your quest to do great work,

    Craig.
    Categories: DBA Blogs

    Silence

    Greg Pavlik - Sat, 2014-07-26 11:26
Silence. Sometimes sought after, but in reality almost certainly feared - the absence of not just sound but voice. Silence is often associated with divine encounter - the neptic tradition of the Philokalia comes to mind - but also and perhaps more accurately with abandonment, divine or otherwise. I recently read Shusaku Endo's Silence, a remarkable work, dwelling on the theme of abandonment in the context of the extirpation of Kakure Kirishitan communities in Tokugawa Japan. Many resilient families survived and eventually came out of hiding during the liberalization of the mid-19th century, but the persecutions were terrible. Their story is deeply moving (sufficiently so that over time I find myself drawn to devotion to the image of Maria-Kannon). Endo's novel was not without controversy but remains one of the great literary accomplishments of the 20th century.

In fact, the reason for this post is a kind of double entendre on silence: the relative silence in literate western circles with respect to Japanese literature of the past century. Over the last month, I realized that virtually no one I had spoken with had read a single Japanese novel. Yet, like Russia of the 19th century, Japan produced a concentration of great writers and great novelists in the 20th century that is set apart: the forces of profound national change (and defeat) created the crucible of great art. That art carries the distinctive aesthetic sense of Japan - a kind of openness of form - but is necessarily the carrier of universal, humanistic themes.

Endo is a writer of the post-war period - the so-called third generation, and in my view the last of the wave of great Japanese literature. Read him. But don't stop - perhaps don't start - there. The early 20th-century works of Natsume Soseki are a product of the Meiji period. In my view, Soseki is not only a father of Japanese literature but one of the greatest figures of world literature taken as a whole - I Am a Cat remains one of my very favorite novels. Two troubling post-war novels by Yukio Mishima merit attention - Confessions of a Mask and The Sailor Who Fell From Grace with the Sea - both of which I would characterize broadly as existential masterpieces. The topic of identity in the face of westernization is also a moving theme in Osamu Dazai's No Longer Human. I hardly mean this as a complete survey - something in any case I am not qualified to provide - just a pointer toward something broader and important.

    My encounter with contemporary Japanese literature - albeit limited - has been less impactful (I want to like Haruki Murakami in the same way I want to like Victor Pelevin, but both make me think of the distorted echo of something far better). And again like Russia, it is difficult to know what to make of Japan today - where its future will lead, whether it will see a cultural resurgence or decline. It is certain that its roots are deep and I hope she finds a way to draw on them and to flourish.


    SYNC 2014 !

    Bas Klaassen - Thu, 2014-07-24 13:15
At Proact we are organizing the knowledge platform SYNC 2014 on September 17 at the Rotterdam Cruise Terminal. All of today's IT infrastructure developments in one day: • An interactive program chaired by Lars Sørensen, known from BNR • A keynote by Marco Gianotten of Giarte, the Dutch "Gartner" in the field of Outsourcing/Managed Services • Huisman Equipment on the
    Categories: APPS Blogs

    Unlimited Session Timeout

    Jim Marion - Thu, 2014-07-24 11:21

    There are a lot of security admins out there that are going to hate me for this post. There are a lot of system administrators, developers, and users, however, that will LOVE me for this post. The code I'm about to share with you will keep the logged in PeopleSoft user's session active as long as the user has a browser window open that points to a PeopleSoft instance. Why would you do this? I can think of two reasons:

    • Your users have several PeopleSoft browser windows open. If one of them times out because of inactivity at the browser window level, then it will kill the session for ALL open windows. That just seems wrong.
    • Your users have long running tasks, such as completing performance reviews, that may require more time to complete than is available at a single sitting. For example, imagine you are preparing a performance review and you have to leave for a meeting. You don't have enough information in the transaction to save, but you can't be late for the meeting either. You know if you leave, your session will time out while you are gone and you will lose your work. This also seems wrong.

    Before I show you how to keep the logged in user's session active, let's talk about security... Session timeouts exist for two reasons (at least two):

    • Security: no one is home, so lock the door
    • Server side resource cleanup: PeopleSoft components require web server state. Each logged in user session (and browser window) consumes resources on the web server. If the user is dormant for a specific period of time, reclaim those resources by killing the user's session.

    We can "lock the door" without timing out the server side session with strong policies on the workstation: password protected screen savers, etc.

    So here is how it works. Add the following JavaScript to the end of the HTML definition PT_COMMON (or PT_COPYURL if using an older version of PeopleTools) (or even better, if you are on PeopleTools 8.54+, use component and/or role based branding to activate this script). Next, turn down your web profile's timeout warning and timeout to something like 3 and 5 minutes or 5 and 10 minutes. On the timeout warning interval, the user's browser will place an Ajax request to keep the session active. When the user closes all browser windows, the reset won't happen so the user's server side session state will terminate.

    What values should you use for the warning and timeout? As low as possible, but not so low you create too much network chatter. If the browser makes an ajax request on the warning interval and a user has 10 windows open, then that means the user will trigger up to 10 Ajax requests within the warning interval window. Now multiply that by the number of logged in users at any given moment. See how this could add up?
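
A quick back-of-the-envelope sketch of that chatter (the user and window counts here are illustrative assumptions, not measurements from any real deployment):

```python
def keepalive_requests_per_hour(users, windows_per_user, warning_interval_min):
    """Worst case: every open window fires one Ajax reset per warning interval."""
    resets_per_window_per_hour = 60 / warning_interval_min
    return int(users * windows_per_user * resets_per_window_per_hour)

# 500 logged-in users, 10 windows each, 5-minute warning interval
load = keepalive_requests_per_hour(500, 10, 5)
print(load)  # 60000 keep-alive requests per hour
```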

    Here is the JavaScript:

(function (root) {
    // xhr adapted from http://toddmotto.com/writing-a-standalone-ajax-xhr-javascript-micro-library/
    var xhr = function (type, url, data) {
        var methods = {
            success: function () {},
            error: function () {}
        };

        var parse = function (req) {
            var result;
            try {
                result = JSON.parse(req.responseText);
            } catch (e) {
                result = req.responseText;
            }
            return [result, req];
        };

        var XHR = root.XMLHttpRequest || ActiveXObject;
        var request = new XHR('MSXML2.XMLHTTP.3.0');
        request.open(type, url, true);
        request.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
        request.onreadystatechange = function () {
            if (request.readyState === 4) {
                if (request.status === 200) {
                    methods.success.apply(methods, parse(request));
                } else {
                    methods.error.apply(methods, parse(request));
                }
            }
        };

        request.send(data);
        return {
            success: function (callback) {
                methods.success = callback;
                return methods;
            },
            error: function (callback) {
                methods.error = callback;
                return methods;
            }
        };
    }; // END xhr

    var timeoutIntervalId;
    var resetUrl;

    /* Replace the warning-message timeout with an Ajax call.
     *
     * Clear the old timeouts after 30 seconds;
     * Macs don't set the timeout until ~1000 ms.
     */
    root.setTimeout(function () {
        /* some pages don't have timeouts defined */
        if (typeof (timeOutURL) !== "undefined") {
            if (timeOutURL.length > 0) {
                resetUrl = timeOutURL.replace(/expire$/, "resettimeout");
                if (totalTimeoutMilliseconds !== null) {
                    root.clearTimeout(timeoutWarningID);
                    root.clearTimeout(timeoutID);

                    timeoutIntervalId =
                        root.setInterval(resetTimeout /* defined below */,
                            root.warningTimeoutMilliseconds);
                }
            }
        }
    }, 30000);

    var resetTimeout = function () {
        xhr("GET", resetUrl)
            .success(function (msg) {
                /* do nothing */
            })
            .error(function (xhr, errMsg, exception) {
                alert("failed to reset timeout");
                /* error; fall back to the delivered method */
                (root.setupTimeout || root.setTimeout2)();
            });
    };
}(window));

    A special "shout out" to Todd Motto for his Standalone Ajax/XHR JavaScript micro-library which is embedded (albeit modified) in the JavaScript above.

    AWR Warehouse

    Asif Momen - Wed, 2014-07-23 21:10
AWR Warehouse is a central repository configured for long-term AWR data retention. It stores AWR snapshots from multiple source databases. Increasing AWR retention on production systems would typically add overhead and cost to mission-critical databases, so offloading the AWR snapshots to a central repository is a better idea. Unlike the default AWR retention period of 8 days, the AWR Warehouse default retention period is "forever"; however, it is configurable in weeks, months, or years.

    For more information on AWR Warehouse click on the following link for a video tutorial. 

    http://www.youtube.com/watch?v=StydMitHtuI&feature=youtu.be

    My Oracle Support Community Enhancement Brings New Features

    Joshua Solomin - Wed, 2014-07-23 18:33


    Be sure to visit our My Oracle Support Community Information Center to see what is new. Choose from the tabs to watch the How to Video Series. You can also enroll for a live webcast on Wednesday, August 6 at 9am PST.

One change: you can now read blogs in My Oracle Support Community. The new Support Blogs space provides access to Support-related blogs. The My Oracle Support Blog provides posts on the portal and tools that span all product areas.

    Support Blogs also allow you to stay in touch with the latest product-specific news, tools, and troubleshooting tips in a growing list of product blogs maintained by Support engineers. Check back frequently to read new posts and discover new blogs.

    Spark: A Discussion

    Greg Pavlik - Wed, 2014-07-23 09:36
    A great presentation, worth watching in its entirety.

    With apologies to my Hadoop friends but this is good for you too.

    The Customer Experience

    Steve Karam - Wed, 2014-07-23 08:00
    The Apple Experience

    I’m going to kick this post off by taking sides in a long-standing feud.

    Apple is amazing.

    There. Edgy, right? Okay, so maybe you don’t agree with me, but you have to admit that a whole lot of people do. Why is that?

[Image from AppleFanSite.com: NOT part of the Customer Experience]
Sure, there are the snarky few who believe Apple products are successful due to an army of hipsters with thousands in disposable income, growing thick beards and wearing skinny jeans with pipes in mouth and books by Jack Kerouac in hand, sipping lattes while furiously banging away on the chiclet keyboard of their Macbook Pro with the blunt corner of an iPad Air that sports a case made of iPhones. I have to admit, it does make for an amusing thought. And 15 minutes at a Starbucks in SoHo might make you feel like that's absolutely the case. But it's not.

    If you browse message boards or other sites that compare PCs and Apple products, you’ll frequently see people wondering why someone would buy a $2,000 Macbook when you can have an amazing Windows 8.1 laptop with better specs for a little over half the price. Or why buy an iPad when you can buy a Samsung tablet running the latest Android which provides more freedom to tinker. Or why even mess with Apple products at all when they’re not compatible with Fragfest 5000 FPS of Duty, or whatever games those darn kids are playing these days.

[Image from cnet.com: Part of the Customer Experience]
The answer is, of course, customer experience. Apple has it. When you watch a visually stunning Apple commercial, complete with crying grandpas Facetiming with their newborn great-grandson and classrooms of kids typing on Macbook Airs, you know what to expect. When you make the decision to buy said Macbook Air, you know that you will head to the Apple Store, usually in the posh mall in your town, and that it will be packed to the gills with people buzzing around looking at cases and Beats headphones and 27″ iMacs. You know that whatever you buy will come in a sleek white box, and will be placed into a thick, durable bag with two drawstring cords that you can wear like a backpack.

    When you get it home and open the box, it’s like looking at a Tesla Model S. Your new laptop, situated inside a silky plastic bed and covered in durable plastic with little tabs to peel it off. The sleek black cardboard wrapped around a cable wound so perfectly that there’s not a single millimeter of space between the coils, nor a plug out of place. The laptop itself will be unibody, no gaps for fans or jiggly CD-ROM trays or harsh textures.

All of which is to say, Apple provides an amazing customer experience. Are their products expensive, sometimes ridiculously so? Of course. But people aren't just buying into the product, they're buying into the "Apple life." And why not? I'd rather pay for experiences than products any day. I may be able to get another laptop with better specs than my Macbook Pro Retina, but there will always be something missing. Not the same customer experience. Maybe the screen resolution isn't quite so good, maybe the battery doesn't last as long, or maybe it's something as simple as the power cord coming wrapped in wire bag ties with a brick the size of my head stuffed unceremoniously into a plastic bag. The experience just isn't there, and I feel like I've bought something that's not as magnificent as the money I put into it, features and specs be damned.

    Customer experience isn’t just a buzz phrase, and it doesn’t just apply to how you deal with angry customers or how you talk to them while making a sale. It also doesn’t mean giving your customer everything they want. Customer experience is the journey from start to finish. It’s providing a predictable, customer-centric, and enjoyable experience for a customer that is entrusting their hard-earned cash in your product. And it applies to every business, not just retail computer sellers and coffee shops. What’s more, it applies to anyone in a service-oriented job.

    Customer Experience for IT Professionals

    In a previous post I mentioned how important it is to know your client. Even if your position is Sub-DBA In Charge of Dropping Indexes That Start With The Letter Z, you still have a customer (Sub-DBA In Charge Of Dropping Indexes That Start With The Letters N-Z, of course). Not just your boss, but the business that is counting on you to do your job in order to make a profit. And you may provide an exceptional level of service. Perhaps you spend countless hours whittling away at explain plans until a five page Cognos query is as pure as the driven snow and runs in the millisecond range. But it’s not just what you do, but how you do it that is important.

    I want you to try something. And if you already do this, good on you. Next time you get a phone call request from someone at your work, or have a phone meeting, or someone sends you a chat asking you to do something, I want you to send a brief email back (we call this an “ack” in technical terms) that acknowledges their request, re-lists what they need in your own words (and preferably with bullets), and lists any additional requirements or caveats. Also let them know how long it will take. Make sure you don’t underestimate, it’s better to quote too much time and get it to them early. Once you’ve finished the work, write a recap email. “As we discussed,” you might say, “I have created the five hundred gazillion tables you need and renamed the table PBRDNY13 to PBRDNY13X.” Adding, of course, “Please let me know if you have any other requests.”

    If the task you did involves a new connection, provide them the details (maybe even in the form of a TNSNAMES). If there are unanswered questions, spell them out. If you have an idea that could make the whole process easier next time, run it by them. Provide that level of experience on at least one task you accomplish for your customer if you do not already, and let me know if it had any impact that you can tell. Now do it consistently.
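
If you send a lot of these, even a tiny template helps keep the ack/recap structure consistent. A minimal sketch using only the standard library (the field names and sample items here are invented for illustration):

```python
from string import Template

# Hypothetical ack-email skeleton: requester, bulleted restatement, estimate
ACK = Template(
    "Hi $requester,\n\n"
    "Confirming your request:\n"
    "$items\n"
    "Estimated completion: $eta.\n"
)

items = "\n".join(
    "  * " + task
    for task in ["Create the reporting tables",
                 "Rename table PBRDNY13 to PBRDNY13X"]
)
msg = ACK.substitute(requester="Pat", items=items, eta="Friday EOD")
print(msg)
```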

From what I've seen, this is what separates the "workers" from the "rockstars." It's not the ability to fix problems faster than a speeding bullet (though that helps, as a service that sells itself), but the ability to properly communicate the process and give people a good expectation that they can count on.

    There’s a lot more to it than that, I know. And some of you may say that you lack the time to have this level of care for every request that comes your way. Perhaps you’re right, or perhaps you’re suffering from IT Stockholm Syndrome. Either way, just give it a shot. I bet it will make a difference, at least most of the time.

    Conclusion

    Recently, I became the Director of Customer Education and Experience at Delphix, a job that I am deeply honored to have. Delphix is absolutely a product that arouses within customers an eager want, it solves complex business problems, has an amazing delivery infrastructure in the Professional Services team, and provides top notch support thereafter. A solid recipe for Customer Experience if there ever was one. But it’s not just about the taste of the meal, it’s about presentation as well. And so it is my goal to continuously build an industrialized, scalable, repeatable, and enjoyable experience for those who decide to invest their dollar on what I believe to be an amazing product. Simply put, I want to impart on them the same enthusiasm and confidence in our product that I have.

    I hope you have the chance to do the same for your product, whatever it may be.

    The post The Customer Experience appeared first on Oracle Alchemist.

    Teradata bought Hadapt and Revelytix

    Curt Monash - Wed, 2014-07-23 03:29

    My client Teradata bought my (former) clients Revelytix and Hadapt.* Obviously, I’m in confidentiality up to my eyeballs. That said — Teradata truly doesn’t know what it’s going to do with those acquisitions yet. Indeed, the acquisitions are too new for Teradata to have fully reviewed the code and so on, let alone made strategic decisions informed by that review. So while this is just a guess, I conjecture Teradata won’t say anything concrete until at least September, although I do expect some kind of stated direction in time for its October user conference.

    *I love my business, but it does have one distressing aspect, namely the combination of subscription pricing and customer churn. When your customers transform really quickly, or even go out of existence, so sometimes does their reliance on you.

    I’ve written extensively about Hadapt, but to review:

    • The HadoopDB project was started by Dan Abadi and two grad students.
    • HadoopDB tied a bunch of PostgreSQL instances together with Hadoop MapReduce. Lab benchmarks suggested it was more performant than the coyly named DBx (where x=2), but not necessarily competitive with top analytic RDBMS.
    • Hadapt was formed to commercialize HadoopDB.
    • After some fits and starts, Hadapt was a Cambridge-based company. Former Vertica CEO Chris Lynch invested even before he was a VC, and became an active chairman. Not coincidentally, Hadapt had a bunch of Vertica folks.
    • Hadapt decided to stick with row-based PostgreSQL, Dan Abadi’s previous columnar enthusiasm notwithstanding. Not coincidentally, Hadapt’s performance never blew anyone away.
    • Especially after the announcement of Cloudera Impala, Hadapt’s SQL-on-Hadoop positioning didn’t work out. Indeed, Hadapt laid off most or all of its sales and marketing folks. Hadapt pivoted to emphasize its schema-on-need story.
    • Chris Lynch, who generally seems to think that IT vendors are created to be sold, shopped Hadapt aggressively.

    As for what Teradata should do with Hadapt:

    • My initial thought for Hadapt was to just double down, pushing the technology forward, presumably including a columnar option such as the one Citus Data developed.
    • But upon reflection, if it made technical sense to merge the Aster and Hadapt products, that would be better yet.

    I herewith apologize to Aster co-founder and Hadapt skeptic Tasso Argyros (who by the way has moved on from Teradata) for even suggesting such heresy. :)

    Complicating the story further:

    • Impala lets you treat data in HDFS (Hadoop Distributed File System) as if it were in a SQL DBMS. So does Teradata SQL-H. But Hadapt makes you decide whether the data is in HDFS or the SQL DBMS, and it can’t be in both at once. Edit: Actually, see Dan Abadi’s comments below.
    • Impala and Oracle’s new SQL-H competitor have daemons running on every data node. So does one option in Hadapt. But I don’t think SQL-H does that yet.

I was less involved with Revelytix than with Hadapt (although I'm told I served as the "catalyst" for the original Teradata/Revelytix partnership). That said, Teradata - like Oracle - is always building out a data integration suite to cover a limited universe of data stores. And Revelytix's dataset management technology is a nice piece toward an integrated data catalog.


    EID Holidays and things to do

    Syed Jaffar - Wed, 2014-07-23 03:07
Looking forward to a much-anticipated 9-day EID holiday break to complete the to-do list I have been carrying for a while now. I am determined to finish some of the writing assignments that I have kept pending for a long time. At the same time, I will explore the new features of 12.1.0.2 and Exadata, as we might be going with that combination in the coming weeks for a Data Warehouse project.

    Will surely blog about my test scenarios and will share the inputs on Oracle 12c new features.

    I wish everyone a very happy and prosperous EID in advance.


    Subscribe to Oracle FAQ aggregator