Feed aggregator

Approaches to Consider for Your Organization’s Windchill Consolidation Project

This post comes from Fishbowl Solutions’ Senior Solutions Architect, Seth Richter.

More and more organizations need to merge multiple Windchill instances into a single one, whether after acquiring another company or because separate Windchill implementations grew up along old divisional borders. Whatever the situation, these organizations want to consolidate into a single Windchill instance to gain efficiencies and other benefits.

The first task for a company in this situation is to assemble the right team and develop the right plan. The team will need to understand the budget and begin to document the key requirements and their implications. Will they hire an experienced partner like Fishbowl Solutions? If so, we recommend involving the partner early in the process so they can help navigate the key decisions, avoid pitfalls, and develop the best approach for success.

Once you start evaluating the technical process and tools to merge the Windchill instances, the most likely options are:

1. Manual Method

Moving data from one Windchill system to another manually is always an option. This method might be viable if there are small pockets of data to move in an ad-hoc manner. However, it is extremely time consuming, so proceed with caution: if you get halfway through and then switch to one of the following methods, you may have hurt the process rather than helped it.

2. Third Party Tools (Fishbowl Solutions LinkExtract & LinkLoader tools)

This process can be a cost-effective alternative, but it is not as robust as the Windchill Bulk Migrator, so your requirements will dictate whether it is viable.

3. PTC Windchill Bulk Migrator (WBM) tool

This is a powerful but complex tool that works great if you have an experienced team running it. Fishbowl prefers the PTC Windchill Bulk Migrator in many situations because it can complete large merge projects over a weekend and includes historical versions in the migration.

A recent Fishbowl project involved a billion-dollar manufacturing company that had acquired another business and needed to consolidate CAD data from one Windchill system into its own. The project had an aggressive timeline because it needed to be completed before the company's seasonal rush (and also be ready for an ERP integration). During the three-month project window, we kicked off the project, executed all of the test migrations and validations, scheduled a 'go live' date, and then completed the final production migration over a weekend. Users at the acquired company checked their data into their "old" Windchill system on a Friday and were able to check their data out of the main corporate instance on Monday, with zero engineering downtime.

Fishbowl Solutions' PTC/PLM team has completed many Windchill merge projects such as this one. The unique advantage of working with Fishbowl is that we are PTC Software Partners and Windchill programming experts. Oftentimes, when other reseller/consulting partners get stuck waiting on PTC technical support, Fishbowl has been able to problem-solve and keep projects on time and on budget.

If your organization is seeking an effective and efficient way to bulk load data from one Windchill system to another, our experts at Fishbowl Solutions can accomplish this on time and on budget. Urgency is a priority in these circumstances, and we want to make the transition as hassle-free as possible, with no downtime. Not sure which tool is the best fit for your Windchill migration project? Check out our website, click the "Contact Us" tab, or reach out to Rick Passolt in our business development department for more information or to request a demo.

Contact Us

Rick Passolt
Senior Account Executive
952.456.3418
mcadsales@fishbowlsolutions.com

Seth Richter is a Senior Solutions Architect at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do.

The post Approaches to Consider for Your Organization’s Windchill Consolidation Project appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Momentum16 – Day 1 – InfoArchive first approach

Yann Neuhaus - Tue, 2016-11-01 12:04

As Gérard explained in his first blog, today was the first day not specific to the partners. I had the opportunity to attend some business-centric (and, for me, not really interesting) sessions in the morning. Then the morning ended and the afternoon began with two keynotes: "Dell EMC Opening Keynote" and "Digital Transformation Keynote". Finally I was able to attend a hands-on session on InfoArchive, and that's what I will talk about in this blog, since it's the only piece of technical information I was able to get today.

 

Like at every other event, there are exhibitions and exhibitors showing at their booths what they are doing around EMC. Of course there is also a booth dedicated to the InfoArchive solution if you want to talk to some EMC experts, and I think that's a pretty good way to see and understand what this solution does.

 

EMC InfoArchive is a unified enterprise archiving platform that stores related structured data and unstructured content in a single consolidated repository. This product enables corporations to preserve the value of enterprise information in a single, compliant, and easily accessible unified archive. Basically, it's a place where you can store content to be archived on low-cost storage, because this kind of information is usually kept only for legal reasons (read only) and doesn't need to be accessed very often.

 

InfoArchive is composed of three components: an included Web Server, a Server (the core of the application) and finally a Database (it uses an Xhive database (XML), just like xPlore). Therefore you can very easily provide an XML file that will be used as an import file and that contains the content to be archived by InfoArchive. Basically, everything that can be transformed into an XML format (metadata/content) can be put inside InfoArchive. This solution provides some default connectors such as:

  • Documentum
  • SharePoint (can archive documents and/or complete sites)
  • SAP

 

These default connectors are great, but if they aren't enough, you can define your own, with the information that you want to store and how you want to index it, transform it, and so on. And of course this is defined in XML files. At the moment this configuration can be a little bit scary, since it is all done manually, but I heard that a GUI for the configuration might be coming soon, if it isn't already in version 4.2. InfoArchive is apparently fully web-based, and therefore, based on a discussion I had with an EMC colleague, it should technically be possible to archive all the content of a SharePoint site, for example, and then access this content from Documentum or any other location, as long as it uses web-based requests to query InfoArchive.

 

During the hands-on session (my first time working with InfoArchive), I had to create a new application/holding that can be used to archive tweets. At the end of the one and a half hours, I had successfully created my application and I was able to search for tweets based on their creationDate, userName, hashTags, retweetCount, and so on. That was actually done pretty easily by following the help guide provided by EMC (specific to this use case), but if you don't have this help guide, you had better be an InfoArchive expert, because you need to know each and every one of the XML tags that need to be added, and where to add them, to get something working properly.

 

See you tomorrow for the next blog with hopefully more technical stuff to share.

 

The article Momentum16 – Day 1 – InfoArchive first approach appeared first on Blog dbi services.

Momentum16 – Day1 – Feelings

Yann Neuhaus - Tue, 2016-11-01 12:00

This first day at Momentum 2016

Strictly speaking I should be writing the second post, as we started yesterday with a partner session where we got some information. One piece of news was that EMC had more than 400 partners a few years ago; today this has been reduced to fewer than 80, and dbi services is still one of them. For us this is good news, and I hope it is also good news for our current and future customers.

 

Today's sessions, apart from the keynotes held by Rohit Ghai, were more related to customer experience, solutions ECD partners can propose, business presentations, and descriptions of particular challenges companies had to face and how they dealt with them, without going into technical details.
As I am more on the technical side, this was more for my general culture, I would say.

 

In the keynote we learned that with Documentum 7.3 cost savings will increase. For instance, PostgreSQL can be used with Documentum 7.3, the upgrade will be faster, and so on. Since time is money…
PostgreSQL can be an interesting subject, as dbi services is also active in this database, and I will have to work with our DB experts to see what we have to test and how, and to find out the pros and cons of using PostgreSQL from a technical point of view, since the license cost will certainly decrease. I planned, no I have, to go to the technical session tomorrow about "What's new in Documentum 7.3".

 

I also took the opportunity to talk with some Dell EMC partners to learn more about the solutions they propose. For instance, I was able to talk with Neotys people to understand what their product can bring us compared to JMeter or LoadRunner, which we or our customers are using for load tests. Having a better view of the possible solutions in this area can help me when customers have specific requirements and need help choosing the best tool.
I also had a chat with Aerow, and they showed me how ARender4Documentum works and how fast "big" documents can be displayed in their HTML5 viewer. So even if the first day cannot be viewed as a technical day, I actually learned a lot.
In this kind of event, what I also find cool is that you can meet people, for instance at lunch time around a table, and start talking about your/their experiences, your/their concerns, solutions, and so on. So today we had a talk about the cloud (private, public) and what it means when you have a validated system.

 

So let’s see what will happen tomorrow, the day where more technical information will be shared.

Note: Read Morgan's blog, where you can find technical stuff. You know, I felt Morgan was frustrated today as he could not "eat" technical food :-)

 

The article Momentum16 – Day1 – Feelings appeared first on Blog dbi services.

Oracle Positioned as a Leader in the Gartner Magic Quadrant for Horizontal Portals, 2016

WebCenter Team - Tue, 2016-11-01 11:44

Summary

Consumerization, convergence, continuously evolving technology and a shift toward business influence are changing the horizontal portal market profoundly. Leaders of portal and other digital experience initiatives face more complex and more consequential decisions.

Market Definition/Description

Gartner defines "portal" as a personalized point of access to relevant information, business processes and other people. Portals address various audiences, including employees, customers, partners and citizens, and support a wide range of vertical markets and business activities. As a product, a horizontal portal is a software application or service used to create and manage portals for a wide range of purposes.

The requirements of digital business are driving waves of innovation and drawing new vendors into the portal market. The evolved landscape is increasingly populated by vendors eschewing traditional portal standards and practices in favor of more flexible, leaner and lighter-weight technology. Vendors with roots in areas adjacent to the portal market, especially web content management (WCM), increasingly offer capability suitable for portal use cases.

Vendor revenue in the portal and digital engagement technologies market declined more than 5% between 2014 and 2015, when estimated revenue was at about $1.64 billion. But Gartner expects a revenue resurgence as organizations see the need to expand and improve portal capabilities as an essential part of broader digital experience initiatives. As a result, Gartner expects the market for portal and digital engagement technologies to grow at a 2.83% compound annual growth rate (CAGR) between 2015 and 2016, then to rebound to a healthier growth rate of about 5% over the next five years (see "Forecast: Enterprise Software Markets Worldwide, 2013-2020, 3Q16 Update" ).

Figure 1. Magic Quadrant for Horizontal Portals

Source: Gartner (2016)

Oracle was positioned as a Leader in the Gartner Magic Quadrant for Horizontal Portals for its Oracle WebCenter Portal offering. 

Oracle WebCenter Portal is a portal and composite applications solution that delivers intuitive user experiences for the enterprise that are seamlessly integrated with enterprise applications. Oracle WebCenter Portal optimizes the connections between people, information and applications, provides business activity streams so users can navigate, discover and access content in context, and offers dynamic personalization of applications, portals and sites so users have a customized experience.

With social, mobile and analytics driving the next wave of digital innovation, businesses require that portals provide intuitive yet personalized user experiences with integrated social, collaboration and content management capabilities. Oracle WebCenter Portal is the complete, open and integrated enterprise portal and composite applications solution that enables the development and deployment of internal and external portals and websites, composite applications, self-service portals and mash-ups with integrated social and collaboration services and enterprise content management capabilities.

With Oracle WebCenter Portal, organizations can:

  • Improve business productivity by providing employees, customers and partners with a modern user experience to access contextual information in a rich, personalized and collaborative environment.
  • Speed development by providing developers with a comprehensive and flexible user experience platform that includes an extensive library of reusable components.
  • Increase business agility by extending and integrating their existing SaaS and on-premise applications such as Oracle Marketing Cloud, Oracle Sales Cloud, Oracle E-Business Suite; Siebel, PeopleSoft, and JD Edwards; and SAP seamlessly.

Oracle is pleased to be named a Leader in the 2016 Gartner Magic Quadrant for Horizontal Portals. You can access the full report here.

Source: "Gartner Magic Quadrant for Horizontal Portals", Jim Murphy, Gene Phifer, Gavin Tay, Magnus Revang, 17 October 2016.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Oracle.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

More Information

The full report can be found here: http://www.gartner.com/doc/reprints?id=1-3KSXYZ1&ct=161027&st=sb 

Using SQL to Query JSON Files with Apache Drill

Rittman Mead Consulting - Tue, 2016-11-01 09:49

I wrote recently about what Apache Drill is, and how to use it with OBIEE. In this post I wanted to demonstrate its great power in action for a requirement that came up recently. We wanted to analyse our blog traffic, broken down by blog author. Whilst we have Google Analytics to provide the traffic, it doesn't include the blog author. This is held within the blog platform, which is Ghost. The common field between the two datasets is the post "slug". From Ghost we could get a dump of the data in JSON format. We needed to find a quick way to analyse and extract from this JSON a list of post slugs and associated author.

One option would be to load the JSON into a RDBMS and process it from within there, running SQL queries to extract the data required. For a long-term large-scale solution, maybe this would be appropriate. But all we wanted to do here was query a single file, initially just as a one-off. Enter Apache Drill. Drill can run on a single laptop (or massively clustered, if you need it). It provides a SQL engine on top of various data sources, including text data on local or distributed file systems (such as HDFS).

You can use Drill to dive straight into the JSON:

0: jdbc:drill:zk=local> use dfs;
+-------+----------------------------------+
|  ok   |             summary              |
+-------+----------------------------------+
| true  | Default schema changed to [dfs]  |
+-------+----------------------------------+
1 row selected (0.076 seconds)
0: jdbc:drill:zk=local> select * from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json` limit 1;
+----+
| db |
+----+
| [{"meta":{"exported_on":1478002781679,"version":"009"},"data":{"permissions":[{"id":1,"uuid":"3b24011e-4ad5-42ed-8087-28688af7d362","name":"Export database","object_type":"db","action_type":"exportContent","created_at":"2016-05-23T11:24:47.000Z","created_by":1,"updated_at":"2016-05-23T11:24:47.000Z","updated_by":1},{"id":2,"uuid":"55b92b4a-9db5-4c7f-8fba-8065c1b4b7d8","name":"Import database","object_type":"db","action_type":"importContent","created_at":"2016-05-23T11:24:47.000Z","created_by":1,"updated_at":"2016-05-23T11:24:47.000Z","updated_by":1},{"id":3,"uuid":"df98f338-5d8c-4683-8ac7-fa94dd43d2f1","name":"Delete all content","object_type":"db","action_type":"deleteAllContent","created_at":"2016-05-23T11:24:47.000Z","created_by":1,"updated_at":"2016-05-23T11:24:47.000Z","updated_by":1},{"id":4,"uuid":"a3b8c5c7-7d78-442f-860b-1cea139e1dfc","name":"Send mail","object_type":"mail","action_

But from this we can see the JSON object is a single column, db, of array type. Let's take a brief detour into one of my favourite command-line tools - jq. This lets you format, filter, and extract values from JSON. Here we can use it to get an idea of how the data is structured. We can do this in Drill, but jq gives us a headstart:
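A sketch of the kind of jq call that reveals this top-level structure, run against the same export file:

> jq '.db[0] | keys' rittman-mead.ghost.2016-11-01.json
[
  "data",
  "meta"
]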

We can see that under the db array are two elements; meta and data. Let's take meta as a simple example to expose through Drill, and then build from there into the user data that we're actually after.

Since the root data element (db) is an array, we need to FLATTEN it:

0: jdbc:drill:zk=local> select flatten(db) from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json` limit 1;
+--------+
| EXPR$0 |
+--------+
| {"meta":{"exported_on":1478002781679,"version":"009"},"data":{"permissions":[{"id":1,"uuid":"3b24011e-4ad5-42ed-8087-28688af7d362","name":"Export database","object_type":"db","action_type":"exportContent","created_at":"2016-05-23T11:24:47.000Z","created_by":1,"updated_at":"2016-05-23T11:24:47.000Z","updated_by":1},{"id":2,"uuid":"55b92b4a-9db5-4c7f-8fba-8065c1b4b7d8","name":"Import database","object_type":"db","action_type":"importContent","created_at":"2016-05-23T11:24:47.000Z","created_by":1,"updated_at":"2016-05-23T11:24:47.000Z","u

Now let's query the meta element itself:

0: jdbc:drill:zk=local> with db as (select flatten(db) from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json`) select db.meta from db limit 1;
Nov 01, 2016 2:18:31 PM org.apache.calcite.sql.validate.SqlValidatorException <init>
SEVERE: org.apache.calcite.sql.validate.SqlValidatorException: Column 'meta' not found in table 'db'
Nov 01, 2016 2:18:31 PM org.apache.calcite.runtime.CalciteException <init>
SEVERE: org.apache.calcite.runtime.CalciteContextException: From line 1, column 108 to line 1, column 111: Column 'meta' not found in table 'db'
Error: VALIDATION ERROR: From line 1, column 108 to line 1, column 111: Column 'meta' not found in table 'db'

SQL Query null

[Error Id: 9cb4aa98-d522-42bb-bd69-43bc3101b40e on 192.168.10.72:31010] (state=,code=0)

This didn't work, because if you look closely at the above FLATTEN, the resulting column is called EXPR$0, so we need to alias it in order to be able to reference it:

0: jdbc:drill:zk=local> select flatten(db) as db from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json`;
+----+
| db |
+----+
| {"meta":{"exported_on":1478002781679,"version":"009"},"data":{"permissions":[{"id":1,"uuid":"3b24011e-4ad5-42ed-8087-28688af7d362","name":"Export database","object_type":"db","action_type":"exportConten

Having done this, I'll put the FLATTEN query as a subquery using the WITH syntax, and from that SELECT just the meta elements:

0: jdbc:drill:zk=local> with ghost as (select flatten(db) as db from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json`) select ghost.db.meta from ghost limit 1;
+------------------------------------------------+
|                     EXPR$0                     |
+------------------------------------------------+
| {"exported_on":1478002781679,"version":"009"}  |
+------------------------------------------------+
1 row selected (0.317 seconds)

Note that the column is EXPR$0 because we've not defined a name for it. Let's fix that:

0: jdbc:drill:zk=local> with ghost as (select flatten(db) as db from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json`) select ghost.db.meta as meta from ghost limit 1;
+------------------------------------------------+
|                      meta                      |
+------------------------------------------------+
| {"exported_on":1478002781679,"version":"009"}  |
+------------------------------------------------+
1 row selected (0.323 seconds)
0: jdbc:drill:zk=local>

Why's that matter? Because it means that we can continue to select elements from within it.

We could continue to nest the queries, but it gets messy to read, and complex to debug any issues. Let's take this meta element as a base one from which we want to query, and define it as a VIEW:

0: jdbc:drill:zk=local> create or replace view dfs.tmp.ghost_meta as with ghost as (select flatten(db) as db from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json`) select ghost.db.meta as meta from ghost;
+-------+-------------------------------------------------------------+
|  ok   |                           summary                           |
+-------+-------------------------------------------------------------+
| true  | View 'ghost_meta' created successfully in 'dfs.tmp' schema  |
+-------+-------------------------------------------------------------+
1 row selected (0.123 seconds)

Now we can select from the view:

0: jdbc:drill:zk=local> select m.meta.exported_on as exported_on, m.meta.version as version from dfs.tmp.ghost_meta m;
+----------------+----------+
|  exported_on   | version  |
+----------------+----------+
| 1478002781679  | 009      |
+----------------+----------+
1 row selected (0.337 seconds)

Remember that when you're selecting nested elements you must alias the object that you're selecting from. If you don't, then Drill assumes that the first element in the column name (for example, meta.exported_on) is the table name (meta), and you'll get an error:

Error: VALIDATION ERROR: From line 1, column 8 to line 1, column 11: Table 'meta' not found
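For reference, the shape of query that triggers this error looks like the following (a sketch, not taken verbatim from the original post):

0: jdbc:drill:zk=local> select meta.exported_on, meta.version from dfs.tmp.ghost_meta;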

So having understood how to isolate and query the meta element in the JSON, let's progress onto what we're actually after - the name of the author of each post, and associated 'slug'.

Using jq again we can see the structure of the JSON file, with the code taken from here:

> jq 'path(..)|[.[]|tostring]|join("/")' rittman-mead.ghost.2016-11-01.json |grep --color=never post|more
"db/0/data/posts"
"db/0/data/posts/0"
"db/0/data/posts/0/id"
"db/0/data/posts/0/uuid"
"db/0/data/posts/0/title"
[...]

So posts data is under the data.posts element, and from manually poking around we can see that user data is under the data.users element.

Back to Drill, we'll create views based on the same pattern as we used for meta above; flattening the array and naming the column:

use dfs.tmp;
create or replace view ghost_posts as select flatten(ghost.db.data.posts) post from ghost;
create or replace view ghost_users as select flatten(ghost.db.data.users) `user` from ghost;
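These two definitions reference a ghost view that does not appear in the excerpt above; following the same flatten-and-alias pattern used for ghost_meta, it would presumably have been created beforehand along these lines (a sketch, assuming the same source file):

create or replace view ghost as select flatten(db) as db from `/Users/rmoff/Downloads/rittman-mead.ghost.2016-11-01.json`;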

The ghost view is the one created above, in the dfs.tmp schema. With these two views created, we can select values from each:

0: jdbc:drill:zk=local> select u.`user`.id,u.`user`.name from ghost_users u where u.`user`.name = 'Robin Moffatt';
+---------+----------------+
| EXPR$0  |     EXPR$1     |
+---------+----------------+
| 15      | Robin Moffatt  |
+---------+----------------+
1 row selected (0.37 seconds)

0: jdbc:drill:zk=local> select p.post.title,p.post.slug,p.post.author_id from ghost_posts p where p.post.title like '%Drill';
+----------------------------------+----------------------------------+---------+
|              EXPR$0              |              EXPR$1              | EXPR$2  |
+----------------------------------+----------------------------------+---------+
| An Introduction to Apache Drill  | an-introduction-to-apache-drill  | 15      |
+----------------------------------+----------------------------------+---------+
1 row selected (0.385 seconds)

and join them:

0: jdbc:drill:zk=local> select p.post.slug as post_slug,u.`user`.name as author from ghost_posts p inner join ghost_users u on p.post.author_id = u.`user`.id where u.`user`.name like 'Robin%' and p.post.status='published' order by p.post.created_at desc limit 5;
+------------------------------------------------------------------------------------+----------------+
|                                     post_slug                                      |     author     |
+------------------------------------------------------------------------------------+----------------+
| connecting-oracle-data-visualization-desktop-to-google-analytics-and-google-drive  | Robin Moffatt  |
| obiee-and-odi-security-updates-october-2016                                        | Robin Moffatt  |
| otn-appreciation-day-obiees-bi-server                                              | Robin Moffatt  |
| poug                                                                               | Robin Moffatt  |
| all-you-ever-wanted-to-know-about-obiee-performance-but-were-too-afraid-to-ask     | Robin Moffatt  |
+------------------------------------------------------------------------------------+----------------+
5 rows selected (1.06 seconds)

This is pretty cool. We've gone from a 32MB single-row JSON file to being able to query it with standard SQL, all with a single tool that can run on a laptop or desktop, and supports ODBC and JDBC for use with your favourite BI tools. For data exploration and understanding new datasets, Apache Drill really does rock!

Categories: BI & Warehousing

SQL Server 2016 – R Services Tips: How to find installed packages using T-SQL?

Yann Neuhaus - Tue, 2016-11-01 09:14

If you have restricted access to the server and you do not know whether your packages are installed on the SQL Server instance running R Services, you can find out via T-SQL.
The R function to use is "installed.packages()".
As you can read in the R documentation for installed.packages(), this function scans the DESCRIPTION file of each installed package.
The output is a matrix with 16 columns, including the following information:

  • Package
  • LibPath
  • Version
  • Priority
  • Depends
  • Imports
  • LinkingTo
  • Suggests
  • Enhances
  • OS_type
  • License
  • Built

To illustrate, here is an example using installed.packages(), with the result written to a table of 16 columns:

EXECUTE sp_execute_external_script @language = N'R',
@script=N'x <- data.frame(installed.packages()) 
			OutputDataSet <- x[,c(1:16)]'

R_Services_Installed_packages01

Just for your information, if you change the number of columns to 17, you get the following error message:
Msg 39004, Level 16, State 20

R_Services_Installed_packages02

To get just the information I need, I create a temporary table with the package name, the path and the version. This information is in the first 3 columns:

CREATE TABLE #packages_list
(
[Package] sysname
,[Package_Path] sysname
,[Version] NVARCHAR(20)
)
INSERT INTO #packages_list
EXECUTE sp_execute_external_script @language = N'R' ,
@script=N'x <- data.frame(installed.packages())
OutputDataSet <- x[,c(1:3)]'

SELECT COUNT(*) as NumberOfPackages FROM #packages_list

SELECT * FROM #packages_list

R_Services_Installed_packages03

As you can see, 47 packages are installed by default with the R Services.
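If you only need to check whether one specific package is present, a simple filter on the temporary table is enough. A quick sketch, using RevoScaleR (one of the packages shipped with R Services) as the example:

SELECT [Package], [Version]
FROM #packages_list
WHERE [Package] = 'RevoScaleR'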
I hope that this little tip will help you get started with the R language in SQL Server ;-)

 

The article SQL Server 2016 – R Services Tips: How to find installed packages using T-SQL? appeared first on Blog dbi services.

Connecting Oracle Data Visualization Desktop to Google Analytics and Google Drive

Rittman Mead Consulting - Tue, 2016-11-01 05:42

To use Data Visualisation Desktop (DVD) with data from Google Analytics or Google Drive, you need to set up the necessary credentials on Google so that DVD can connect to them. You can see a YouTube video of this process here.

Before starting, you need a piece of information from Oracle DVD that will be provided to Google during the setup. From DVD, create a new connection of type Google Analytics, and make a note of the provided redirect URL:

Once you have this URL, you can go and set up the necessary configuration in Google. To do this, go to https://console.developers.google.com/ and sign in with the same Google credentials that have access to Google Analytics.

Then go to https://console.developers.google.com/iam-admin/projects and click on Create Project

Having created the project, we now need to make available the necessary APIs to it, after which we will create the credentials. Go to https://console.developers.google.com/apis/ and click on Analytics API

On the next screen, click Enable, which adds this API to the project.

If you want, at this point you can return to the API list and also add the Google Drive API by selecting and then Enabling it.

Now we will create the credentials required. Click on Credentials, and then on OAuth consent screen. Fill out the Product name field.

Click on Save, and then on the next page click on Create credentials and select OAuth client ID from the dropdown list

Set the Application type to Web Application, give it a name, and then copy the URL given in the DVD New Connection window into the Authorised redirect URIs field.

Click Create, and then make a note of the provided client ID and client secret. Watch out for any spaces before or after the values (h/t @Nephentur). Keep these credentials safe as you would any password.

Go back to DVD and paste these credentials into the Create Connection screen, and click Authorise. When prompted, sign in to your Google Account.

Click on Save, and your connection is now created successfully!

With a connection to Google Analytics created, you can now analyse the data available from within it. You'll need to set the measure columns appropriately, as by default they're all taken by DVD to be dimensions.

Categories: BI & Warehousing

Debian dist-upgrade: ipw2200 firmwares missing...

Dietrich Schroff - Tue, 2016-11-01 05:25
After the dist-upgrade, the firmware for the ipw2200 wireless chipset was missing.
Grrrr...
No more internet access - so I had to use a good old LAN cable ;-)

The fix was very easy:

apt-get install firmware-ipw2x00
rmmod ipw2200
modprobe ipw2200

and the wireless network is up again...

Debian dist-upgrade 5 (lenny) to 6 (squeeze): insserv: exiting now

Dietrich Schroff - Tue, 2016-11-01 04:02
After several years I decided to upgrade my old laptop to the current Debian version.
The first apt-get dist-upgrade ran into the following problem:
insserv: warning: script 'K01hotplug-net' missing LSB tags and overrides
insserv: warning: script 'K01x2goserver' missing LSB tags and overrides
insserv: warning: script 'K01oracle-xe' missing LSB tags and overrides
insserv: warning: script 'S85vpnagentd_init' missing LSB tags and overrides
insserv: warning: script 'S02vpnclient_init' missing LSB tags and overrides
insserv: warning: script 'S15initrd-tools.sh' missing LSB tags and overrides
insserv: warning: script 'S15hotplug' missing LSB tags and overrides
insserv: warning: script 'S15modutils' missing LSB tags and overrides
insserv: warning: script 'modutils' missing LSB tags and overrides
insserv: warning: script 'hotplug' missing LSB tags and overrides
insserv: warning: script 'initrd-tools.sh' missing LSB tags and overrides
insserv: warning: script 'hotplug-net' missing LSB tags and overrides
insserv: warning: script 'vpnclient_init' missing LSB tags and overrides
insserv: warning: script 'x2goserver' missing LSB tags and overrides
insserv: warning: script 'oracle-xe' missing LSB tags and overrides
insserv: warning: script 'vpnagentd_init' missing LSB tags and overrides
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!
insserv: There is a loop between service vpnagentd_init and rc.local if started
insserv:  loop involving service rc.local at depth 23
insserv:  loop involving service vpnagentd_init at depth 1
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: Fehler beim Bearbeiten von /var/cache/apt/archives/util-linux_2.20.1-5.3_i386.deb (--unpack):
 Unterprozess neues pre-installation-Skript gab den Fehlerwert 1 zurück
Fehler traten auf beim Bearbeiten von:
 /var/cache/apt/archives/util-linux_2.20.1-5.3_i386.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Hmmm. A further try with apt-get dist-upgrade -f failed with the same error. What was wrong?

insserv: Starting vpnagentd_init depends on rc.local and therefore on system facility `$all' which can not be true!

I just searched for "vpnagentd_init" in /etc and found it in /etc/init.d. Quick workaround: I moved vpnagentd_init into a backup directory, and after that the upgrade worked without any problem...

LISTAGG

Tom Kyte - Tue, 2016-11-01 03:06
Hello, I need to create a list of distinct values. Here's my test case: create table t(id number,dt date,txt varchar2(1)); alter table t add constraint t_pk primary key(id); create sequence t_seq; insert into t values(t_seq.nextval,trunc(sysdat...
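The question text is cut off in the feed, but the usual approach to a distinct LISTAGG is common enough to sketch: de-duplicate in a subquery before aggregating. The column name is taken from the truncated DDL above:

select listagg(txt, ',') within group (order by txt) as distinct_txt
from (select distinct txt from t);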
Categories: DBA Blogs

Impact on DB performance with enabling SQL Plan Baseline

Tom Kyte - Tue, 2016-11-01 03:06
Hi Tom, Our database version is 12.1.0.2, running on exadata platform. For stabilizing the performance, we are planning to enable SQL Plan baselines in our database. By default, the parameter "optimizer_capture_sql_plan_baselines" is F...
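The question is truncated in the feed, but for context, baseline behaviour is governed by two instance parameters. A sketch of how capture is typically switched on (optimizer_capture_sql_plan_baselines defaults to FALSE, while optimizer_use_sql_plan_baselines defaults to TRUE):

ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE;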
Categories: DBA Blogs

Database context needs to auto create every time the DB startup

Tom Kyte - Tue, 2016-11-01 03:06
I'm using 11.2.0.3 DB. I created a database context using the sql below. This works fine but if I restart the DB this context does not exists any more. I have to logon and run this command again. I'm not sure if this is intended functiona...
Categories: DBA Blogs

Find user who performed DDL

Tom Kyte - Tue, 2016-11-01 03:06
Hi, I need your help to find the name of the user who performed DDL on a specific table eg. table emp. How can i find this ? Note ,audit is disabled in my db. os:rhel 6 db:11.2.0.3
Categories: DBA Blogs

Which Oracle Database Patchsets Can Be Used With EBS?

Steven Chan - Tue, 2016-11-01 02:05

Oracle version numbers can be very long.  Complicating matters further, different product families within Oracle use the version hierarchy differently.  This can make it confusing to determine whether a specific product update has been certified with Oracle E-Business Suite.

Oracle Database update numbers can be daunting.  However, it might be reassuring to learn that the fifth-level digit in an Oracle Database version is not relevant to E-Business Suite certifications.  In other words, you can apply any database patchset that has been certified at the fourth-digit level.

For example, we have certified EBS 12.2 with Database 12.1.0.2. All of the following Oracle Database patchsets will work with EBS 12.2:

  • Oracle Database 12.1.0.2.0 (documented certification)
  • Oracle Database 12.1.0.2.160419 (April 2016 Database Proactive Bundle Patch)
  • Oracle Database 12.1.0.2.160719 (July 2016 Database Proactive Bundle Patch)

This is sometimes shown in our documentation in either of the following ways:

  • Oracle Database 12.1.0.2
  • Oracle Database 12.1.0.2.x

The absence of a fifth digit, or the presence of an 'x' in the fifth digit's place, means that any fifth-digit-level updates may be applied to an Oracle Database for EBS 12.2 without requiring a new certification.  This applies to all environments, including test and production environments.

Related Articles

Categories: APPS Blogs

Useful Guidelines for Designing and Developing in Fluid

PeopleSoft Technology Blog - Mon, 2016-10-31 19:45

So you've started adopting PeopleSoft's Fluid User Interface to provide a better experience and mobility for your user communities.  Great!  For those of you taking the extra step of doing some of your own development in Fluid, we offer some guidelines to help with your development efforts.  These docs will help you create home pages, components, and navigation flows that are consistent with those delivered by PeopleSoft's application development teams.

There is a compendium article on My Oracle Support that provides links to several useful guides:  FLUID UI: PeopleSoft Fluid User Interface Supplemental Documentation (Doc ID 1909955.1)

Here are some of the guides you'll find on this page:

  • Cascading Style Sheet Guide for PeopleSoft Fluid:  Contains descriptions of delivered CSS styles. Using this information will be helpful for creating custom fluid applications as well as extending current CSS features delivered in your PeopleSoft applications.
  • Pivot Grid Security:  Provides information about security for Real-time Component Search in the PeopleSoft Fluid User Interface mode.
  • PeopleSoft Fluid User Interface Programming Fundamentals:  Covers advanced topics related to creating fluid applications.
  • Converting Classic Components to PeopleSoft Fluid User Interface:  Provides descriptions of the steps involved in a sample scenario of converting a classic page to a fluid page, helping to illustrate the concepts of fluid development.
  • Fluid User Interface and Navigation Standards:  A set of guidelines and standards for applying the recommended techniques of fluid application development.

In addition, there is a great Fluid User Interface Design Standards document available here.

If you want to ensure that your applications perform optimally, check out this red paper on performance for Fluid.

Of course there is a lot of good information in PeopleBooks too, but these resources go further.


So Long ACED

Jonathan Lewis - Mon, 2016-10-31 14:53

… and thanks for all the fish.

Today I removed myself from the OTN ACE program. This isn't a reflection on anything to do with the ACE program – quite the reverse, in fact – it's because they're introducing steps to ensure that the ACE Directors can justify their titles. Unfortunately, as anyone who has gone through (e.g.) ISO 9001 certification can tell you, quality assurance tends to translate into paperwork and ticking boxes – and while I can always find time to write up some interesting feature of Oracle I really find it hard to prioritise time for filling in forms.

In the last 4 months I’ve failed to file my monthly list of relevant activities twice, failed to request funding for two of the international conferences I’ve spoken at, and failed to submit claims against the two for which I had requested and received funding approval – so there really was no hope of me being motivated to collect all the extra details that the new regime requires.

So, best wishes to the ACE program – I’m still happy to do what I’ve been doing for the last 25+ years, and I’ll still be meeting up with ACEDs, but I’ll just be wearing one label less as I do it.


SQL Server 2016: New useful function STRING_SPLIT()

Yann Neuhaus - Mon, 2016-10-31 11:01

Now, in the latest version of SQL Server, you have one of the functions most eagerly awaited by developers and administrators alike, the ability to split a string natively in T-SQL:

STRING_SPLIT(<character expression>,<separator>)

This function has 2 parameters:

  • The character expression with a data type of nvarchar,varchar,nchar or char
  • The separator with a data type of nvarchar(1), varchar(1), nchar(1) or char(1)

The function returns a table with a single column containing all of the split fragments.
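A minimal call looks like this (illustrative input string; the column returned by STRING_SPLIT is named value):

SELECT value
FROM STRING_SPLIT('one.two.three', '.');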

A first test with some text of mine from the dbi-services website and '.' as the separator:
string_split01

If I use more than one character for the separator like ‘. ’, an error message appears:
Msg 214, Level 16, State 11, Line 1
Procedure expects parameter ‘separator’ of type ‘nchar(1)/nvarchar(1)’.

string_split02

This function is very useful if you have a list like the countries beginning with A:
string_split03
Another usage is for a folder path:
string_split04
This is a nice, easy and useful function, but for the moment it is limited to a single-character separator…
MSDN link to the function here.

 

The article SQL Server 2016: New useful function STRING_SPLIT() appeared first on Blog dbi services.

Introducing HelloSign for Oracle Documents Cloud Service

WebCenter Team - Mon, 2016-10-31 08:56

Authored by: Sarah Gabot, Demand Generation Manager, HelloSign 

A few months ago, HelloSign was excited to announce our solution for Oracle Documents Cloud Service. It’s now possible for Oracle Documents Cloud Service customers to easily upload, populate, send, and track important business documents. 

The most exciting part of this partnership is that the possibilities are endless. Paper workflows exist everywhere; they’re in every industry and almost every department. This integration will help any company eliminate costly and redundant paperwork tasks.

How it Works
Open Oracle Documents Cloud Service and select the document you'd like to request a signature for. 

In the menu bar, select “Gather Approval Signatures.” A HelloSign iframe will appear prompting you to indicate who you’d like to sign the document. Type in the names and email addresses of the signers. 


After you set up your signers, you can drag the text fields you want to use onto the document. You can choose a sign date (which will be automatically populated when the document is signed), checkbox, textbox, initials, or signature box. 

Quick tip: If you want to use data validation, choose a textbox field and select the type of validation. This will help you prevent incorrectly formatted data from being submitted to you. For example, if you want to collect a zip code, an error will be shown if a letter is entered instead of a number. 


Once you’ve added all the appropriate fields onto your document, click “Continue.” You’ll be prompted with a message box, allowing you to customize a message that’ll be sent to the designated signer. 

Your signer will receive an email from HelloSign, and he or she will be able to sign the document in a few clicks. When your document is signed, click “Get Signed File” in the menu to retrieve the signed document. And just like that, you’ve successfully requested for a document to be signed without printing a thing. 

Moving to a Paperless Future
When you use HelloSign for Oracle Documents Cloud Service, you’re taking one step closer to a paperless future. Now Oracle users can access their important documents and agreements without having to print or scan anything. 

Because signees will spend less time fussing with paper, you'll be able to get your contracts and documents returned faster. We've seen companies that use HelloSign increase their sales conversion rates by up to 23 percent simply by implementing eSignatures! 

Want to Learn More? 
Want to see our integration in action? Check out our video below. 

To learn how to get started, email our sales team at oracle-sales@hellosign.com or speak to your Oracle Account rep. 

Meta data about Users

Tom Kyte - Mon, 2016-10-31 08:46
Hi tom, I would like to know the table name which contains the users details created in database. example user_objects meta data contain objects created by the user. thank you.
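The feed snippet only poses the question; for reference, the standard data dictionary views for this are ALL_USERS (visible to any user) and DBA_USERS (more detail, requires extra privileges). A quick sketch:

select username, created from all_users order by created;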
Categories: DBA Blogs

How to identify operational days between sets of dates and also non-operational periods (gaps)

Tom Kyte - Mon, 2016-10-31 08:46
I have a simple table with following data: (PK, StartDate, EndDate) for which I want to know what are the periods of operation and what are the non-operational periods. For example: StartDate EndDate 15-Apr-16 29-Apr-16 29-Apr-16 30-Apr-16 ...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator