Other

Accelerated Mobile App Development with Oracle Mobile Cloud Service – Part 2

Part 2: Mobile App Development with MCS

In Part 1, we explored using Oracle’s Mobile Cloud Service as a tool to provide the backend services needed to retrieve data from an Oracle EBS Pricing and Availability form. With our mobile backend and custom APIs created, the actual app development can now begin!

 

Creating a Native App

For the next step of my POC, I’ll be creating a simple iOS application that allows me to look up a Pricing and Availability item and view its properties and warehouse locations. Before firing up Xcode, I need to click “SDK Downloads” on the MCS Applications page and download the iOS SDK.

 

As I mentioned earlier, the SDK will allow me to make MCS API calls with one or two lines of code. To take advantage of it, I need to add the static libraries and header files from the downloaded SDK to my project in Xcode and properly link them.

 

The SDK download also includes an “OMC.plist” file that will need to be added to my project. The OMC.plist will hold the settings that are needed to connect to our Pricing and Availability mobile backend. The Settings tab of the Pricing and Availability backend in MCS has several of the items we need. Since we’re using basic authentication, we need to get the Mobile Backend ID, Anonymous Key, and the Base URL of our MCS environment.

 

We’ll also need the application key we were provided earlier after creating our mobile client. After adding these items to the OMC.plist, the initial MCS setup of our project is complete.

 

When a user runs my app, the first thing they will need to do is log in. I put together a basic login screen with username and password fields.

 

When a user taps the Login button I simply need to authenticate against my mobile backend with these three lines of code:

 

If no error is returned, the authentication is successful and I can dismiss my login screen. My user will then be presented with my search screen, which simply contains a table view with a search bar at the top.

This is the point where we utilize the Pricing and Availability custom API that we previously configured. When a user enters a Pricing and Availability Item’s ID and taps the search button, I’ll need to make a GET call to the /pricingandavailabilityitem/{id} endpoint to retrieve the matching item. Once again, this can be handled with a few lines of code:
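For anyone curious what those few lines are doing under the hood, the SDK is essentially issuing a REST GET against the custom API with the mobile backend headers attached. Below is a rough plain-JavaScript (Node.js) equivalent; the /mobile/custom path convention, the Oracle-Mobile-Backend-Id header, and the API name pricingavailabilityapi are my recollection of the MCS conventions, so verify them against your own instance before relying on them.

// Sketch only: the iOS SDK builds and signs this request for you using the OMC.plist values.
var https = require('https');

var baseUrl   = 'https://YOUR-MCS-INSTANCE.oraclecloud.com'; // placeholder host
var backendId = 'YOUR-MOBILE-BACKEND-ID';                    // from the backend's Settings tab
var auth      = Buffer.from('testuser:password').toString('base64');

https.get(
  baseUrl + '/mobile/custom/pricingavailabilityapi/pricingandavailabilityitem/AS18947',
  { headers: {
      'Oracle-Mobile-Backend-Id': backendId, // identifies the mobile backend
      'Authorization': 'Basic ' + auth       // basic authentication, as configured earlier
  } },
  function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      console.log('Matching item:', JSON.parse(body)); // parsed and shown as a table row
    });
  }
);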

 

The response is then parsed and a result row is added to my table view.

 

Tapping on the result row will bring the user to my item details screen where the Pricing and Availability item’s properties are displayed. I also want to display the warehouse locations for my item on the details screen so I make a similar second call to the pricingandavailabilityitem/{id}/pricingandavailabilityitemlocation endpoint and populate the results in another table view.

 

At this point I have successfully achieved the goal of displaying EBS form information on a mobile device! As you can see, the amount of effort required to authenticate and retrieve the data was minimal, whereas without MCS those tasks would have consumed a large percentage of my time.

 

Creating a MAX App

While the iOS SDK may have made my app development seem fairly effortless, MCS actually provides an easier way for me to achieve my goal. On the MCS Applications page, there is a Mobile Apps section that takes you to the Mobile Application Accelerator (MAX) application.

 

With the MAX application, it is possible to quickly put together a mobile app with absolutely no coding involved. With its drag-and-drop web interface, non-technical business users can easily log in and build their own mobile apps in minutes.

Let’s take a look at building the same POC as a MAX application. Clicking the “New Application” button will take you through a simple app creation wizard.

 

After providing your app name and choosing your screen layout you will be presented with a blank home screen where you can drag and drop UI elements onto various content areas. Just like the native app, my MAX app will first present the user with a search screen that will display Pricing and Availability Item search results. To handle this, I’ll be adding a list element onto my home screen and enabling its search option which will automatically add a search field to the top of it.

 

Next we’ll need to indicate what data will be populated in our list element. Clicking “Add Data” will allow you to map any UI element to a data source. Choosing a data source is as simple as selecting the Pricing and Availability Item resource from our custom API. Our MAX app will automatically use the appropriate API calls to retrieve our data. We can then drag and drop properties from our Pricing and Availability resource onto each of the four available search result row labels to be displayed. I chose to use the Item Description, Item Type, Unit Pricing, and Pricing currency fields.

 

Since our Pricing and Availability Item API call requires an ID parameter we indicate that the list element’s search field will be the source.

 

Our search page now has what it needs to look up a Pricing and Availability Item.

 

In order to see the details of a Pricing and Availability item, we will need to provide an action on the list element’s Actions tab. There, another drag-and-drop interface allows us to indicate that when a list item is tapped, the user is taken to a new Pricing and Availability item detail screen.

 

In addition to displaying the Pricing and Availability item properties, I also want the new Pricing Item Detail page to display the warehouse locations. To handle this, when creating my details screen I choose the “Screen with Top Tabs and Summary” page template and specify three separate tabs: Overview, Quantities, and Warehouses. For each of the tabs, I follow the same process of dragging UI elements onto the content areas and mapping a data source to them. My Overview tab gets a form UI element that displays my Pricing and Availability Item’s properties. The Warehouses tab gets a list element that displays a list of all warehouse locations for the pricing item.

For the Quantities tab, I wanted to demonstrate a nice feature of MCS with the use of a bar chart to easily view the item quantities at each warehouse. I simply drag a bar chart UI element onto the tab and map the data source to my Pricing and Availability Item Locations resource with the warehouses along the X-Axis and the quantities along the Y-axis.

 

With our app complete, testing it out is as easy as hitting the test button. An iOS or Android simulator will run right in your browser.

 

Testing on or publishing to a mobile device isn’t that much more complicated. Once you install Oracle’s Mobile Application Accelerator client app on your device, you can easily add your MAX apps as “apps within an app” via a QR code. Avoiding time-consuming app publishing processes means business users can get the tools they need with a few clicks.

 

Compared to native app development, the MAX app was created in a fraction of the time, and as you can see, no coding was involved. As easy as it was to build my POC, MAX has its limitations. Screens can easily be set up to search, view, add, edit, and delete business objects, but beyond that, you might need to get creative. Developing the right custom API for my Pricing and Availability app could make it possible to submit an item purchase, but the overall user experience will be limited. For more flexibility, native and hybrid apps will still have their place.

 

Conclusion

Overall, my POC just scratches the surface of what MCS can do. With the platform APIs providing database & content storage capabilities, push notifications, offline syncing, and built-in analytics, most of the things mobile apps require are readily available without having to worry about backend hardware and software. Having the ability to quickly assemble these platform API calls into custom APIs that can be reused across many mobile backends means that MCS has the potential to easily bring many aspects of a business to mobile devices.

By utilizing the MCS SDKs, many of the common tasks of mobile app development that had previously been significant technical hurdles now become minor steps handled with a few lines of code. Considering the amount of effort that some of these common tasks required in my previous mobile projects, I believe MCS could have cut my development time in half. Realistically, organizations could have a mobile app in production use within a matter of hours. Being able to realize such quick time to value with a mobile app is definitely a key value proposition of MCS, so if that is important to your organization, I recommend you give MCS a try.

The post Accelerated Mobile App Development with Oracle Mobile Cloud Service – Part 2 appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Accelerated Mobile App Development with Oracle Mobile Cloud Service – Part 1

Part 1: MCS Mobile Backends

The past decade has seen a steady rise in the use of mobile applications across nearly all industries. At Fishbowl Solutions, we have played a part in this trend by developing a series of Android and iOS apps that allow users to easily access their Oracle WebCenter content from their phones and tablets.

In my experience with mobile app development, I have found that a vast majority of development effort is spent on the same common tasks:

  • Writing code to interact with the backend server, such as authentication & authorization
  • Retrieving & storing data
  • Syncing content locally to devices for offline use
  • Sending push notifications across multiple device platforms

These tasks always take a lot more effort than expected and tend to feel like I am reinventing the wheel. Even after the initial development, periodic changes to the backend server often require updated versions of the apps to be pushed through time-consuming publishing processes.

Time spent on these basic tasks is time that could be better spent adding additional features and creating a better user experience. Because of this, I was excited to learn that Oracle’s Mobile Cloud Service could be the solution to these problems.

What is Oracle Mobile Cloud Service?

Oracle Mobile Cloud Service (MCS) is a cloud service that provides a set of tools to support enterprise-wide mobile development. It allows quick creation and deployment of the back-end services your apps require without the initial hardware and software setup. With a small amount of configuration, any of these back-end services can be made available to your apps via REST API calls.

To handle features required by most apps, MCS includes the following built-in “Platform APIs”:

  • Authentication & Authorization
  • Database Storage
  • Content Storage
  • Push Notifications
  • Analytics
  • Offline Data & Syncing
  • Mobile User Management

Having to configure all of these features on your own server would be a daunting task and likely set you back days or even weeks, but MCS provides the capabilities out of the box within minutes.

In addition to these platform APIs, MCS allows custom APIs to be quickly developed in NodeJS to create additional back-end services. With a few lines of JavaScript, additional calls to any of the Platform APIs or external services can be made, allowing you to provide the exact functionality required by your mobile apps.

After your APIs are configured, MCS provides downloadable SDKs for Android, iOS, Cordova (JavaScript), and Windows. Embedding these SDKs in your code allows MCS APIs to be called with one or two lines of code, compared to the many lines that would be required to make the API calls manually.

MCS Mobile Backend Setup

I decided to try out MCS by creating a small proof of concept (POC). One of the problems Fishbowl customers face is accessing forms from Oracle E-Business Suite (EBS) on mobile devices, so I decided that the end goal for my POC would be to view the Pricing and Availability form in EBS on a mobile device.

 

Here is a simplistic view of this business challenge:

 

As a legacy application, EBS has no API of its own. To get around this we decided to enlist the help of a tool called AuraPlayer. AuraPlayer has the ability to provide web services that allow us to externally interact with EBS forms. I’m not going to cover the AuraPlayer details in this blog, but the important thing to know is that after setting up the AuraPlayer services, I can now make a request to <AuraPlayerBaseURL>/PricingAndAvailability_queryByLabel?Item=AS18947 and receive a JSON response containing a list of my AS18947 pricing and availability item’s form fields from EBS along with a list of warehouse locations where the item is in stock.
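Purely to illustrate the general shape of what comes back (the field names below are hypothetical placeholders, not AuraPlayer’s actual response format), the payload contains the item’s form fields plus a list of locations:

// Hypothetical shape only -- the real AuraPlayer payload uses its own field names.
var sampleResponse = {
  item: {                 // form fields read from the EBS Pricing and Availability form
    itemNumber: 'AS18947',
    description: '...',
    itemType: '...',
    unitPrice: 0,
    currency: '...'
  },
  locations: [            // one entry per warehouse where the item is in stock
    { warehouse: '...', quantityOnHand: 0 }
  ]
};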

Once the AuraPlayer services to EBS are configured my Pricing and Availability data is one step closer to reaching my mobile app:

 

At this point we’re now ready to set up MCS to fill the gap in the process. Our final configuration will look like this:

 

Any mobile application connecting to MCS will first and foremost require a Mobile Backend. Mobile Backends are MCS objects that group together a specific set of APIs along with the client applications and the set of users who will utilize them. In this scenario, I need to create a “Pricing and Availability” mobile backend that exposes a custom “Pricing and Availability” API to my mobile app. Since my Pricing and Availability API needs to make calls to the AuraPlayer services, which are outside of MCS, I will also need to create a Connector. Connectors are MCS objects that provide access to external REST and SOAP APIs. Next, in order for my mobile application to access the mobile backend, I will need to register it by setting up a Mobile Application Client. The last item needed is a test user who will have access to the mobile backend and the API. To summarize, here are the steps (labeled in the diagram above):

  1. Create a mobile backend for “Pricing and Availability”
  2. Create a Pricing and Availability custom API
  3. Create AuraPlayer connector
  4. Register my app by setting up Mobile Application Client
  5. Set up a test user

Let’s now walk through the setup process in MCS.

 

After logging into the MCS interface, we first need to click on “Mobile Backends” and create a new mobile backend called “PricingAndAvailabilityBackend”.

 

With our new mobile backend created, the first thing we need to do is create at least one user who can access the backend. This can be done by clicking on the Users tab. In MCS, all mobile backends are associated with one User Realm. User Realms are sets of users that can either be managed directly in MCS or configured to connect to your company’s SSO. In our case, we will just create a new user called “testuser” under the default realm. Now that we have our test user, we can create our new Pricing and Availability custom API. When clicking on the APIs tab, we see a message indicating that we don’t have any APIs selected, but before we create one we first need to create our AuraPlayer connector.

 

A new connector can be created by going to Applications > Connectors and clicking “New Connector”. In the connector setup wizard, I named it “AuraPlayerConnector” and provided the base service URL where the AuraPlayer REST services are accessed.

 

The Rules page of the Connector wizard allows any default parameters to be specified. Since all of my service calls to AuraPlayer have several required parameters I added them here.

 

The last step in the Connector wizard allows the connector to be tested. I provided the /PricingAndAvailability_queryByLabel?Item=AS18947 service URL I mentioned earlier that should return a pricing & availability item from EBS.

 

Since a connector must run under a mobile backend as a specific user, I select my backend and enter my test user’s credentials. I then click “Test Endpoint” and after receiving my expected JSON response I conclude that my AuraPlayer connector is configured correctly!

 

Our next task is to create the custom Pricing and Availability API that will utilize the newly created AuraPlayer connector. Going back to the mobile backend’s APIs tab, we can now click the “New API” button. After providing the name of the API, the first thing to do is specify our available endpoints via the Endpoints tab. Clicking “New Resource” lets you add an endpoint. I initially add two endpoints. One returns a collection of all pricing and availability items with a resource path of:

/pricingandavailabilityitem

The other returns a specific pricing and availability item with a resource path of:

pricingandavailabilityitem/{id}

where {id} is the item number in EBS.

Since my AuraPlayer services can also return a pricing and availability item’s warehouse locations, I decided to create two more endpoints underneath the pricingandavailabilityitem/{id} endpoint. This is done by clicking that endpoint’s “Add Nested Resource” icon. I create one endpoint that returns all pricing and availability item locations for a given item with a resource path of:

pricingandavailabilityitem/{id}/pricingandavailabilityitemlocation

I then create another endpoint that returns a specific pricing and availability item location for a given item with a resource path of:

pricingandavailabilityitem/{id}/pricingandavailabilityitemlocation/{pricingandavailabilityitemlocation_id}

 

For each endpoint created, I specify display names, descriptions, and available methods. For my initial POC, I’ll really only need GET methods.

 

With our endpoints defined, we now need to implement their behavior. MCS custom APIs are written in NodeJS using the ExpressJS framework. By clicking on the Pricing and Availability API’s Implementation tab, you can see a “JavaScript Scaffold” button which allows you to download a pre-built NodeJS project with each of your API’s endpoints already stubbed out for you.

 

After downloading the scaffold package, the main file that needs to be edited is pricingavailabilityapi.js.

 

In this file, each route will need to be implemented. Since my Pricing and Availability API is simply calling my AuraPlayer connector, there won’t be a whole lot of work to be done. For my /pricingandavailabilityitem/{id} route, I basically need to do three things (a rough sketch follows the list):

  1. Get the Pricing and Availability item’s “{id}” parameter from the request object.
  2. Use my connector to make a GET call to AuraPlayer specifying the “PricingAndAvailability_queryByLabel” resource and the id parameter.
  3. Extract the required elements from the AuraPlayerConnector results and return them in the API response.
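As a rough sketch of those three steps, the route body in pricingavailabilityapi.js ends up looking something like the following. The req.oracleMobile.connectors helper and the shape of its result object are written from memory of the MCS custom code documentation, so treat the exact calls as assumptions and compare them against the stubs in your generated scaffold.

module.exports = function (service) {

  // GET /mobile/custom/pricingavailabilityapi/pricingandavailabilityitem/:id
  service.get('/mobile/custom/pricingavailabilityapi/pricingandavailabilityitem/:id',
    function (req, res) {

      // 1. Get the item's {id} parameter from the request object.
      var itemId = req.params.id;

      // 2. Use the connector to make a GET call to AuraPlayer, passing the id
      //    as the Item parameter. (Helper name and signature assumed -- verify locally.)
      req.oracleMobile.connectors.get(
        'AuraPlayerConnector',
        'PricingAndAvailability_queryByLabel',
        { qs: { Item: itemId } }
      ).then(
        function (result) {
          // 3. Extract the elements the app needs and return them in the API response.
          res.status(result.statusCode).send(result.result);
        },
        function (error) {
          res.status(error.statusCode || 500).send(error.error);
        }
      );
    });
};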

 

Aside from building out each of my routes, the other important change is to add my API and connector dependencies in the package.json file.
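The dependency entries are small; they live in an oracleMobile section of package.json and look roughly like this (the key names and the /mobile/connector path form are my recollection of the MCS scaffold, so check them against the generated file):

"oracleMobile": {
  "dependencies": {
    "apis": { },
    "connectors": { "/mobile/connector/AuraPlayerConnector": "1.0" }
  }
}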

 

Once that is taken care of, simply package up the files and upload them on the API’s Implementation tab.

 

With our Pricing and Availability API finished, our mobile backend is almost complete. As I mentioned earlier, in order for our mobile application to access the mobile backend we will need to register it on the PricingAndAvailabilityBackend Clients tab by clicking “New Client”.

 

A client is easily created by specifying the client name, platform, app version, and the bundle ID. Once the client is created you will be presented with an application key that will be needed when we build our app.

 

That’s basically it for our MCS setup. Within a few hours, my mobile app has what it needs to access the Pricing and Availability forms in EBS.

 

While MCS will prove to be valuable at quickly providing your backend services, it also provides the tools to save time on front-end app development. In part 2, I will continue my POC by creating the mobile app that will access the newly created MCS mobile backend.

 

Next: Accelerated Mobile App Development with Oracle Mobile Cloud Service – Part 2

The post Accelerated Mobile App Development with Oracle Mobile Cloud Service – Part 1 appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Digital Transformation Instead of Technology Evolution: Cox Enterprises’ Digital Workplace Success with Oracle WebCenter Portal

In 2014, Fishbowl Solutions engaged with Cox Enterprises to build its employee digital workplace. Prior to that engagement, Fishbowl delivered numerous employee, customer, and partner portals/intranets, but this was our first project where the word portal wasn’t being used to describe what Cox Enterprises would be building and delivering to its employees. Instead, the phrase “digital workplace” detailed how Cox envisioned a consumer-like digital experience that promoted collaboration, sparked innovation, and helped employees get their jobs done – regardless of time, space, or device.

Now neither the term nor concept of a digital workplace was new in 2014. Tech vendors and analysts had been discussing such a workplace environment since around 2008, but you may remember it being called Enterprise 2.0. What stands out to me regarding Enterprise 2.0 was how much collaboration or social capabilities in the workplace became the focus. Such collaboration capabilities as instant messaging, blogs, wikis, and document sharing were thought to be the catalyst for more information sharing, which would lead to more innovation and better employee engagement. However, the place where all this collaboration was supposed to take place – the employee portal or intranet – did not offer the experience or performance that users needed to get work done. Furthermore, the technology and associated features really drove conversations and platform decisions, rather than what users needed from the portal or how they wanted to work.

Contrast the above with how Cox Enterprises decided which portal platform they would use for their employee digital workplace. For them, this started with a focus on the workplace they wanted to provide to their employees: a workplace where employees could collaborate and access relevant information from one system – regardless of device. They invested time and money to learn as much as possible about their eventual portal users (personas) before they decided on the technology with the associated features that could support employee work streams and how they would use the portal.

This focus on the user was part of a much larger “digital transformation” initiative the company was undertaking. This initiative really centered on making sure Cox’s 50,000 employees, who are scattered across several divisions and geographic locations, were engaged and informed. To enable this, Cox leaders wanted to provide them with a similar experience to access company, department, and personal information. After doing this persona analysis and user flow mapping, they decided that Oracle WebCenter Portal would be the system for their employee digital workplace. They based their decision on WebCenter Portal’s tight integration with WebCenter Content, which was key for their overall digital transformation initiative to consolidate as much content as possible within one system. They also needed a system that could handle 1,500+ concurrent users, and WebCenter’s underlying architecture, including WebLogic Server and Oracle Database, exceeded their performance metrics.

I encourage you to learn more about Cox’s digital transformation initiative by attending the webinar they are partnering with Fishbowl on next Thursday, September 14th. Come hear Dave Longacre, one of Cox’s project managers for the digital workplace project, detail the vision, steps, and resulting benefits of Cox’s employee digital workplace. Please click on the link below to register. Also, check out our employee digital workplace page on our website for more resources.

Webinar – How Cox Enterprises Built a Digital Workplace for 50,000 Employees using Oracle WebCenter Portal

The post Digital Transformation Instead of Technology Evolution: Cox Enterprises’ Digital Workplace Success with Oracle WebCenter Portal appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Upcoming Webinar, “WebCenter Search that Works!” highlights Oracle WebCenter integration with Mindbreeze InSpire

Earlier this month, Fishbowl announced the release of our Oracle WebCenter Content Connector for Mindbreeze InSpire. The Connector enables the Mindbreeze enterprise search appliance to securely index and serve content stored in WebCenter Content. The Connector also allows customers to leverage the Mindbreeze Search App Designer to embed modern search apps directly in WebCenter Content.

As the quantity of unstructured information continues to expand, content management success depends on the ability to find data in a growing information flood. Without search that works, managed content becomes lost content. By integrating Oracle WebCenter with Mindbreeze InSpire you can improve information discovery, increase user adoption, and encourage content reuse through better search.

In our upcoming webinar, we will provide an overview of the Mindbreeze InSpire enterprise search appliance and our integrations with both WebCenter Content and Portal. We’ll cover what a typical implementation looks like and why customers are making the switch. We’ll also discuss the migration path off deprecated Oracle Secure Enterprise Search and Google Search Appliance technologies, and options for adding other sources like SharePoint and network shares.

We hope you’ll join us.

The post Upcoming Webinar, “WebCenter Search that Works!” highlights Oracle WebCenter integration with Mindbreeze InSpire appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Imanis Data

DBMS2 - Tue, 2017-08-22 07:46

I talked recently with the folks at Imanis Data. For starters:

  • The point of Imanis is to make copies of your databases, for purposes such as backup/restore, test/analysis, or compliance-driven archiving. (That’s in declining order of current customer activity.) Another use is migration via restoring to a different cluster than the one that created the data in the first place.
  • The data can come from NoSQL database managers, from Hadoop, or from Vertica. (Again, that’s in declining order.)
  • As you might imagine, Imanis makes incremental backups; the only full backup is the first one you do for that database.
  • “Imanis” is a new name; the previous name was “Talena”.

Also:

  • Imanis has ~35 subscription customers, a significant majority of which are in the Fortune 1000.
  • Customer industries, in roughly declining order, include:
    • Financial services other than insurance.
    • Insurance.
    • Retail.
    • “Technology”.
  • ~40% of Imanis customers are in the public cloud.
  • Imanis is focused on the North American market at this time.
  • Imanis has ~45 employees.
  • The Imanis product just hit Version 3.

Imanis correctly observes that there are multiple reasons you might want to recover from backup, including:

  • General disaster/system failure.
  • Bug in an application that writes data.
  • Malicious acts, including encryption-by-ransomware.

Imanis uses the phrase “point-in-time backup” to emphasize its flexibility in letting you choose your favorite time-version of your rolling backup.

Imanis also correctly draws the inference that the right backup strategy is some version of:

  • Make backups very frequently. This boils down to “Do a great job of making incremental backups (and restoring from them when necessary).” This is where Imanis has spent the bulk of its technical effort to date.
  • In case recovery is needed, identify the last clean (or provably/confidently clean) version of the database and restore from that. The identification part boils down to letting the backup databases be queried directly. That’s largely a roadmap item.
    • Imanis has recently added its own functionality for querying the backup database.
    • JDBC/whatever general access is still in the future.

Note: When Imanis backups offer direct query access, the possibility will of course exist to use the backup data for general query processing. But while that kind of capability sounds great in theory, I’m not aware of it being a big deal (on technology stacks that already offer it) in practice.

The most technically notable other use cases Imanis mentioned are probably:

  • Data science dataset generation. Imanis lets you generate a partial copy of the database for analytic or test purposes.
    • You can project, select or sample your data, which suggests use of the current query capabilities.
    • There’s an API to let you mask Personally Identifiable Information by writing your own data transformations.
  • Archiving/tiering/ILM (Information Lifecycle Management). Imanis lets you divide data according to its hotness.

Imanis views its competition as:

  • Native utilities of the data stores.
  • Hand-coded scripts.
  • Datos.io, principally in the Cassandra market (so far).

Beyond those, the obvious comparison to Imanis is Delphix. I haven’t spoken with Delphix for a few years, but I believe that key differences between Delphix and Imanis start:

  • Delphix is focused on widely-installed RDBMS such as Oracle.
  • Delphix actually tries to have different production logical copies of your database run off of the same physical copy. Imanis, in contrast, offers technology to help you copy your databases quickly and effectively, but the copies you actually use will indeed be separate from each other.

Imanis software runs on its own cluster, based on hacked Hadoop. A lot of the hacking seems to relate to a metadata store, which supports things like:

  • Understanding which (incrementally backed up) blocks need to be pulled together to make a specific copy of the database.
  • Putting data in different places for ILM/tiering.

Another piece of Imanis tech is machine-learning-based anomaly detection.

  • As incrementally backed-up blocks arrive, Imanis flags anomalous ones.
  • Each flag is given a reason.
  • You can denounce the flag as a false alert, and hopefully similar flags won’t be raised in the future.

The technology for this seems rather basic:

  • Random forests for the flagging.
  • No drilldown w/in the Imanis system for follow-up.

But in general concept this is something a lot more systems should be doing.

Most of the rest of Imanis’ tech story is straightforward — support various alternatives for computing platforms, offer the usual security choices, etc. One exception that was new to me was the use of erasure codes, which seem to be a generalization of the concept of parity bits. Allegedly, when used in a storage context these have the near-magical property of offering 4X replication safety with only a 1.5X expansion of data volume. I won’t claim to have understood the subject well enough to see how that could make sense, or what tradeoffs it would entail.

Categories: Other

More notes on the transition to the cloud

DBMS2 - Thu, 2017-08-17 04:11

Last year I posted observations about the transition to the cloud. Here are some further thoughts.

0. In case any doubt remained, the big questions about transitioning to the cloud are “When?” and “How?”. “Whether”, by way of contrast, is pretty much settled.

1. The answer to “When?” is generally “Over many years”. In particular, at most enterprises the cloud transition will span the tenures of multiple CIOs.

Few enterprises will ever execute on simple, consistent, unchanging “cloud strategies”.

2. The SaaS (Software as a Service) vs. on-premises tradeoffs are being reargued, except that proponents now spell SaaS C-L-O-U-D. (Ali Ghodsi of Databricks made a particularly energetic version of that case in a recent meeting.)

3. In most countries (at least in the US and the rest of the West), the cloud vendors deemed to matter are Amazon, followed by Microsoft, followed by Google. And so, when it comes to the public cloud, Microsoft is much, much more enterprise-savvy than its key competitors.

4. Another non-technical competitive factor: Wal-Mart isn’t the only huge company that is hostile to the Amazon cloud because of competition with other Amazon businesses.

5. It was once thought that in many small countries around the world, there would be OpenStack-based “national champion” cloud winners, perhaps as subsidiaries of the leading telecom vendors. This doesn’t seem to be happening.

Even so, some of the larger managed-economy and/or generally authoritarian countries will have one or more “national champion” cloud winners each — surely China, presumably Russia, obviously Iran, and probably some others as well.

6. While OpenStack in general seems to have fizzled, S3 compatibility has momentum.

7. Finally, let’s return to our opening points: The cloud transition will happen, but it will take considerable time. A principal reason for slowness is that, as a general rule, apps aren’t migrated to platforms directly; rather, they get replaced by new apps on new platforms when the time is right for them to be phased out anyway.

However, there’s a codicil to those generalities — in some cases it’s easier to migrate to the new platform than in others. The hardest migration was probably when the rise of RDBMS, the shift from mainframes to UNIX and the switch to client/server all happened at once; just about nothing got ported from the old platforms to the new. Easier migrations included:

  • The switch from Unix to Linux. They were very similar.
  • The adoption of virtualization. A major purpose of the technology was to make migration easy.
  • The initial adoption of DBMS. Then-legacy apps relied on flat file systems, which DBMS often found easy to emulate.

The cloud transition is somewhere in the middle between those extremes. On the “easy” side:

  • Popular database management technologies and so on are available in the cloud just as they are on-premise.
  • Major app vendors are doing the hard work of cloud ports themselves.

Nonetheless, the public cloud is in many ways a whole new computing environment — and so for the most part, customer-built apps will prove too difficult to migrate. Hence my belief that overall migration to the cloud will be very incremental.

Categories: Other

Notes on data security

DBMS2 - Thu, 2017-08-10 04:15

1. In June I wrote about burgeoning interest in data security. I’d now like to add:

  • Even more than I previously thought, demand seems to be driven largely by issues of regulatory compliance.
  • In an exception to that general rule, many enterprises have vague mandates for data encryption.
  • In awkward contradiction to that general rule, there’s a general sense that it’s just security’s “turn” to be a differentiating feature, since various other “enterprise” needs are already being well-addressed.

We can reconcile these anecdata pretty well if we postulate that:

  • Enterprises generally agree that data security is an important need.
  • Exactly how they meet this need depends upon what regulators choose to require.

2. My current impressions of the legal privacy vs. surveillance tradeoffs are basically:

  • The freer non-English-speaking countries are more concerned about ensuring data privacy. In particular, the European Union’s upcoming GDPR (General Data Protection Regulation) seems like a massive addition to the compliance challenge.
  • The “Five Eyes” (US, UK, Canada, Australia, New Zealand) are more concerned about maintaining the efficacy of surveillance.
  • Authoritarian countries, of course, emphasize surveillance as well.

3. Multiple people have told me that security concerns include (data) lineage and (data) governance as well. I’m fairly OK with that conflation.

  • By citing “lineage” I think they’re referring to the point that if you don’t know where data came from, you don’t know if it’s trustworthy. This fits well with standard uses of the “data lineage” term.
  • By “data governance” they seem to mean policies and procedures to limit the chance of unauthorized or uncontrolled data change, or technology to support those policies. Calling that “data governance” is a bit of a stretch, but it’s not so ridiculous that we need to make a big fuss about it.

In other words: If your data transformation pipelines aren’t locked down, then your data isn’t locked down either.

4. But how seriously does that last point need to be taken? For starters, the possibility of erroneous calculations:

  • Is a strong threat to analytic accuracy, as has been recognized at least for the decades that “one version of the truth” has been a catchphrase.
  • Has some regulatory risk, e.g. in the United States around Sarbanes-Oxley.
  • Is not as big a deal for the core security threat of data theft/exfiltration.

Further, it’s not too hard architecturally to have a divide between:

  • Data transformation for operational use cases, which may need to be locked down.
  • Data transformation for purely investigative analytics, which can be very fluid, for transformation technologies such as Hadoop, Spark and Excel alike.

Bottom line: Data transformation security is an accessible must-have in some use cases, but an impractical nice-to-have in others.

Categories: Other

A Fishbowl Success Story: The Benefits of Consolidating Disparate CAD Databases

A large medical device manufacturer wanted to fully integrate their R&D, engineering, and manufacturing organizations. This would allow a more efficient, capable and robust product development system that would help the flow of new, innovative products and never fall short on quality.

One key obstacle was the amount of data scattered across the globe in various PDM, PLM and network folders.  This data needed to be organized and consolidated into a unified system with quality processes that would achieve FDA certification.  This consolidation would enable individuals to access accurate data from any location at any time.  Just from a CAD data perspective, there were hundreds of thousands of Solidworks files across 7+ locations around the world in 4+ PDM/PLM systems plus random network file folders.

The company partnered with Fishbowl to migrate the Solidworks PDM, PLM, CAD systems into their single global Windchill PDMLink system.  A key criterion for them choosing Fishbowl was Fishbowl’s LinkExport and LinkLoader family of products.  LinkExport automates the data extraction from PDMWorks and Enterprise PDM and LinkLoader automates the bulk loading into Windchill.

THE PLAN

The migration plan was to have separate migrations for each location.  Each production migration could be completed over a weekend to minimize business impact (e.g. users would check files into PDMWorks – or whichever legacy system they were using – on Friday and then check them out of Windchill on Monday).  This approach spread out the work and lowered risk since each location also needed to comply with quality audits as part of their test and production migration passes.

RESULTS

Fishbowl successfully executed 7 migrations that consisted of 100,000+ files total.  60,000+ files came from five separate Enterprise PDM and PDMWorks systems and another 40,000+ files from network file folders.  All data was bulk loaded into a single Windchill PDMLink and each migration was completed over a weekend so minimal disruption occurred.  The project ROI was less than 6 months, and the increased efficiencies and innovation have resulted in huge corporate gains.

 

Contact Rick Passolt for more information on LinkLoader and LinkExport
Webinar: Automate and Expedite PTC Windchill Bulk Loading

 

Date: August 17th, 2017

Time: 1:00-2:00pm CST

Speaker: Rick Passolt – Senior Account Executive

Register

The post A Fishbowl Success Story: The Benefits of Consolidating Disparate CAD Databases appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Global Financial Services Company Leverages Oracle WebCenter Content for Compound Document Management to Support Underwriting Memo Application

For this week’s case study, our featured customer, a global financial services company, came to Fishbowl looking to replace the system they had been using to compose financial underwriting documents. The company’s existing system was 15 years old, and the product had since been sold off and left to languish among its customers. Additionally, as the tool had not been updated, it was becoming vastly more difficult to use and maintain in a fast-paced environment. Our client then looked into creating a custom underwriting memo application on Oracle WebCenter Content with Fishbowl.

Our client and the Fishbowl Solutions product development team worked together to build, test, and deploy a modern system on Oracle WebCenter Content. The collaboration proved successful: WebCenter’s content management capabilities and user interface elements reduced credit memo application processing time by 25%.

 

BUSINESS DRIVERS
  • Reduce underwriting process time to enable faster transactions
  • Replace inefficient and archaic system for composing financial underwriting documents
  • Integrate and assemble all content needed for underwriting process to users of current credit application software
  • Ensure content needed for underwriting memo application is securely managed yet highly available
SOLUTION SUMMARY
  • Fishbowl configured Oracle WebCenter Content to manage all content needed for underwriting memo application
  • Integrated Fishbowl’s Compound Document Assembly within company’s credit underwriting system
  • Underwriting memo presented as chapters which include risk factors, business description, operating risk, etc.
  • Compound Document Assembly collates documents and includes non-text elements such as spreadsheets
  • Users can check in/check out the documents and their sections directly from underwriting memo application
  • Users can edit a section of the underwriting memo while another user edits a different section
  • Document structures can be viewed as tabs allowing users to quickly and easily navigate from one report to another
  • Users receive notifications related to any work within system
  • All changes tracked within underwriting memo and versions stored in Oracle WebCenter
CUSTOMER BENEFITS
  • Content management capabilities and user interface elements reduced credit memo application processing time by 25%
  • Content publishing time greatly reduced providing quicker reviews and increased collaboration for underwriting team
  • Documents can be collated and printed for reporting purposes

The post Global Financial Services Company Leverages Oracle WebCenter Content for Compound Document Management to Support Underwriting Memo Application appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Upgrading to Oracle WebCenter Content or Portal 12c: If not now, when?

Fishbowl Solutions will be kicking off a webinar series starting next Thursday, August 3rd. Our first webinar topic will be “5 Key Reasons to Upgrade to Oracle WebCenter Content or Portal 12c”. Why did we pick this topic, and why is this topic relevant now? Those are both good questions, especially if you are a well-informed WebCenter customer and you know that 12c was released almost 2 years ago.

To answer those questions, please let me start by stating that Fishbowl Solutions has performed many WebCenter upgrades over the years. While each one may have been different in size and scope, we have seen some common reasons/themes emerge from what drove customers to start their upgrade when they did.

Why upgrade to WebCenter 12c Now?
  • Get Current with Support and Maintenance
    • Premier and Extended support for 10g customers has elapsed. Most of the customers we talk to know this, but they might not know that they can do an upgrade directly from 10g to 12c. When you consider that Premier support for WebCenter Content and Portal 11g elapses in December of 2018, it makes sense to go directly to 12c instead of 11g. You can review Oracle’s Support Policies for Fusion Middleware here.
  • Explore Cloud Options for Content Management
    • With the release of 12c, Oracle introduced ways to integrate and share content between Oracle WebCenter on premise and the Oracle Content and Experience Cloud. This provided an easy way for organizations to share and collaborate on documents. If your organization is still deciding on your roadmap for content management – on premise, hybrid, cloud first – 12c provides the capabilities to explore use cases for the cloud while maintaining your content on premise.
  • Content and System Consolidation
    • Some legacy WebCenter customers come to the realization that they have too many instances of the system in place, as well as disparate/duplicate content being managed. Instead of trying to audit each one of their individual systems and fix or change any metadata issues, security groups, etc., they decide that doing an upgrade rectifies a lot of these problems, and enables them to get rid of content no longer needing management or retention.
  • Growing List of Environment & Technology Dependencies
    • Perhaps your organization wants to move to the latest version of Oracle Database, but you can’t because your legacy WebCenter system utilizes an older version. Unless you upgrade WebCenter, your organization as a whole may be impacted by not being able to utilize the newest version of associated or dependent technologies.
  • User Expectations – Better User Experience
    • WebCenter Content and Portal 12c provide a better user experience for users and administrators. Since organizations want everyone to experience these better interfaces, they start to consider who the actual users of the system are, and they build an experience designed for each of those user personas. So while the upgrade to 12c would have improved the overall experience, organizations use the upgrade to design the best experience possible to ensure widespread adoption and overall use.

We will discuss each of these in more detail during the webinar next Thursday. You can find more information and register for the webinar here.

We hope you can join us.

 

The post Upgrading to Oracle WebCenter Content or Portal 12c: If not now, when? appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Protecting Financial Data with Oracle WebCenter and Adobe LiveCycle

For over 15 years, Oracle WebCenter has been used by organizations to store, manage, and retain their high-value content. During that time, Fishbowl has helped customers leverage the system to solve many common and unique content management problems. We want to share some of those success stories with you, with the hope that they will help you form new ideas on how to further leverage WebCenter in your organization. Starting today, we will be publishing an “Oracle WebCenter case study of the week”. These case studies will highlight the ways customers are using WebCenter to solve their business problems and drive new process efficiencies.

This week’s customer case study details a global manufacturer of aluminum rolled products. This company came to Fishbowl in search of a solution to make payroll information much more readily available to employees and financial officers, while also keeping that information secure. Fishbowl utilized Oracle WebCenter Imaging & Capture and Adobe LiveCycle to satisfy this content management use case, and also help the customer save around $75,000.

Business Drivers
  • Reduce costly distribution processes involving printing and mailing over 30,000 pages of reports per year.
  • Make access to payroll information much more readily available to employees and financial auditors.
  • Ensure payroll data stored in Oracle WebCenter is highly secure.
Solution Summary
  • Fishbowl implemented WebCenter Capture and Imaging to scan and manage over a dozen types of payroll-related reports including payroll closing, direct deposits, W-4s, and garnishments.
  • Imaged documents are output to a directory where security policies are applied using Adobe LiveCycle’s Information Rights module. This further prevents unauthorized document access.
  • Documents with security information uploaded and stored in existing Oracle WebCenter Content instance and available for viewing by authenticated users.
Oracle WebCenter and Adobe LiveCycle

Document flow from capture with WebCenter to securing content with Adobe Information Rights Management.

Customer Benefits
  • Reduced estimated yearly cost of $75,000 to print and mail over 30,000 payroll-related documents.
  • Ensured that sensitive employee data cannot be seen by unauthorized users.
  • Created a much more accessible and simple Payroll processing system to manage and retain the company’s 16,000+ documents.

 

The post Protecting Financial Data with Oracle WebCenter and Adobe LiveCycle appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

The Future of Content Management: Oracle Content & Experience Cloud

What is the Content and Experience Cloud?

Content and Experience Cloud (CEC) is Oracle’s cloud platform solution for content management and delivery. It brings together Oracle’s Documents Cloud Service (Content) and Oracle’s Sites Cloud Service (Experience) to make a centrally managed platform for your business to contribute, collaborate, and share its content. It sets out to solve many of the headaches associated with content management solutions of the past and present, including:

  • Poor user experience
  • Security concerns
  • Limited access to content and collaboration

This parallels the motto Oracle has used throughout its marketing of the Documents Cloud Service: “Simple, Secure, Everywhere”.

In this post, I’m going to detail how Content and Experience Cloud meets each of these challenges, describing some of the features available. I’ll also give an overview of some of the custom development work I’ve done in the past few weeks, and what kinds of enterprise applications could be developed using similar approaches.

Solving the Problems with Traditional Content Management Systems – Including Oracle WebCenter.

User Experience – Low user adoption and poor user experience have been major challenges facing legacy content management systems. Oracle Content & Experience Cloud aims to remedy some of these problems in a number of ways.

  • Mobile, tablet, and desktop access:
    • Oracle adopted a mobile-forward design pattern for CEC interfaces to adjust for devices that can be used anywhere.
    • View, edit, and manage files from any of these devices with the applications Oracle has provided. All desktop and mobile application downloads can be found together on the “Download Apps” page of your CEC service interface, while mobile apps can also be found on both major mobile app markets (Android App and iPhone App).
  • Share files or folders simply, with the ability to assign access levels to limit what can be done to the content.
  • Conversations can be started about folders, files, or a separate topic altogether.
    • Annotations can be made on specific parts of a document.
    • Documents can be attached to conversations.
    • Conversations can be accessed from the web, desktop, and mobile apps.
  • Integrations exist out of the box with programs like Microsoft Word and Excel for syncing documents or spreadsheets to the cloud. A UI overlay will appear on the program, visually confirming the document as it syncs to the cloud, and expands to provide users with actions like viewing content access history and versions, starting or viewing the document’s conversation, or sharing the document with other members or with anyone by generating public links. Additional actions will also exist in the file menus, allowing users to manage nearly everything about their documents without needing to leave the editor.

Security – A concern of many businesses considering cloud content management is the safety of their files. Oracle secures files through a multi-layered approach.

  • Access to the CEC service requires a username and password managed by a service administrator.
  • Files are encrypted through SSL while in storage and transit to the cloud.
  • Content owners have access control to the content and folders, which can be customized for different tiers of access. Users who are given access to a file in a folder will not have access to the other files that exist within the folder.
  • Service admins have the option to configure virus scans on files upon upload to the cloud. Infected files will be quarantined from the system.
  • Passcodes can be set for mobile devices accessing the cloud. Any files downloaded from the cloud will additionally require authentication to the CEC app in order to be decrypted.
  • Websites can have security applied to control:
    • Individual user/group membership to the published site.
    • Who can see the site when it is (un)published.
    • Who can see or interact with secured content on the site.
  • CEC also includes access to analytics, auditing, and automatic backups.

Access to Content, and Collaboration – Productivity can suffer when content is difficult to access, or hard to find. Content and Experience Cloud provides availability to content anywhere, with streamlined methods of sharing and collaboration.

  • The CEC interface gives users the ability to rapidly collaborate internally or externally by sharing content with other members, or creating public links to folders or files.
  • Mobile, tablet, and desktop access out of the box allows users to view and manage content on the go.
  • Content can be worked on without internet access, and can be synced to the cloud once you regain connectivity.
  • Workflow and review processes allow content to easily and efficiently get published.
  • Conversations allow users to comment on files, folders, or digital assets (including the ability to highlight and annotate specific areas of text, and attach files to your comments).
Customizing Your Experience

Oracle provides a growing set of development resources that can be used to customize sites on CEC. The modular structure of the site components, and the use of modern web libraries and frameworks like RequireJS, KnockoutJS, and Mustache templating, help streamline the process of site development and create a more responsive and rich experience for the end user. I’ve developed a couple of proof-of-concept examples that can serve as stepping stones to custom enterprise components that are either static or that dynamically access files housed in the cloud service.

Custom Component #1: Update Static Content without Coding

Using some of Oracle’s development documentation as a base, the first component I created demonstrates the ability to update static page content through custom settings without touching the code. It utilizes the SitesSDK, which provides a set of functions to integrate custom components with the Content and Experience Cloud. These functions are particularly helpful in providing storage and retrieval of custom settings used to configure components on the page.

For example, when the component is first set on the page, it will load the default settings values, and render them to the template. While editing the site, you can access the settings in the dropdown menu located on the top right of the component.

Custom settings were defined for each of the titles and descriptions of the tile elements. By simply updating the input text for each of these fields in the form and pressing enter, the values update immediately on the component within the page. Moreover, when I am happy with the changes I can click “Save” and “Publish”, and those settings will be published to the site and persist for everyone until they need to be changed again. Anyone with permissions to edit the site would be able to update these values in a matter of seconds and publish the changes without any outages. You can see that updating the “Title 1” field to the value “My Title”, and the “Text 1” field to the value “My Description” will update the first tile within the component.
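For reference, the read side of that behavior only takes a few lines in the component’s view model. The sketch below assumes the SitesSDK getProperty/setProperty functions and the customSettingsData key as I remember them from Oracle’s CEC developer samples, with ko (Knockout) and SitesSDK already loaded by the component’s RequireJS configuration, so verify the names against the SDK bundled with your service.

// Knockout view model for the tile component (sketch).
function TileViewModel() {
  var self = this;
  self.title1 = ko.observable('Default title');        // rendered into the first tile
  self.text1  = ko.observable('Default description');

  // Load any values previously saved from the settings panel.
  SitesSDK.getProperty('customSettingsData', function (data) {
    if (data && data.title1) { self.title1(data.title1); }
    if (data && data.text1)  { self.text1(data.text1); }
  });
}

// The settings panel persists its form values the same way, e.g.:
// SitesSDK.setProperty('customSettingsData', { title1: 'My Title', text1: 'My Description' });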

To demonstrate another use of custom settings, I’ve integrated a filepicker that allows the user to navigate files stored in the cloud and select an image to be displayed in the component on the page. Data returned by the SitesSDK can also give us some information on the image, which may be useful depending on the demands of your component. The image and information about it will also display immediately on the component, so the site editor has a preview of the updated component before publishing it to the site for everyone to see.

Custom settings provide a great way to manage elements of a page that occasionally need manual changes and don’t need to rely on pulling content dynamically from the cloud or another source. They give site managers the flexibility to make changes on the fly and keep the site fresh and current for its audience.

Custom Component #2: Browser for Cloud Content

The second component I created utilizes Oracle’s Content Management API to build a content browser which displays previews, information, and actions on content living in the cloud. The API provides multiple endpoints to allow viewing, creating, modifying, and deleting folders and files. It can also retrieve information on users in the system. Oracle is working to extend the number and functionality of these endpoints in future releases.

In the above screenshot, you can see the documents view from the CEC interface and the files that live in the “images” folder. Below is a screenshot of the custom component, which grabs all of this information and renders it to the site. The data returned in the responses makes it possible to request thumbnails of images and documents, as well as to build actions like “View” and “Download” that open the full file in the CEC interface or download it, respectively. This functionality can be used to create components that grab content dynamically and display it on your site as it is contributed to the cloud.
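At its core, such a browser is a thin layer over a few REST calls. The TypeScript sketch below lists the children of a folder and builds view/download links for each file; the host name is a placeholder, and the `/documents/api/1.2/...` paths and response fields are assumptions based on the documented Documents REST API, so check them against your CEC instance.

```typescript
// Sketch: a minimal cloud content browser over the CEC Documents REST API.
// Endpoint paths, response fields, and the host are assumptions/placeholders
// and should be verified against your instance's API documentation.
const CEC_BASE = 'https://my-instance.example.com'; // placeholder host

interface FolderItem {
  id: string;
  name: string;
  type: 'file' | 'folder';
}

async function listFolderItems(folderId: string, authHeader: string): Promise<FolderItem[]> {
  const response = await fetch(`${CEC_BASE}/documents/api/1.2/folders/${folderId}/items`, {
    headers: { Authorization: authHeader },
  });
  if (!response.ok) {
    throw new Error(`Folder listing failed with HTTP ${response.status}`);
  }
  const body = await response.json();
  // Assumes the response carries an "items" array with id/name/type per child.
  return (body.items ?? []).map((item: any) => ({
    id: item.id,
    name: item.name,
    type: item.type === 'folder' ? 'folder' : 'file',
  }));
}

// Build simple "View" and "Download" actions for a file item.
function fileActions(file: FolderItem): { viewUrl: string; downloadUrl: string } {
  return {
    // Assumed UI path for opening the file in the CEC interface.
    viewUrl: `${CEC_BASE}/documents/fileview/${file.id}`,
    // Documents API endpoint for the raw file content.
    downloadUrl: `${CEC_BASE}/documents/api/1.2/files/${file.id}/data`,
  };
}
```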

With an enterprise-level account, content administrators will have the ability to define their own structured content with access to Content Types, Content Items, Content Layouts, and Digital Assets. This allows content to be designed specifically for your business and opens the door to developing components like a news feed that filters and displays only news content items in a widget on the page, or a search form that returns content filtered on any number of criteria.

Conclusion & Looking to the Future: Integrating with On-Premise and other Back-Office Applications

Content and Experience Cloud provides an ideal platform for content management in the cloud. It aggregates content, digital assets, conversations, and sites to a single location, where power users can delegate access to the people who need it, anywhere. Surface your content to sites on the cloud using custom components to build an interface that works for your business. Make updates quickly to provide always-current information without modifying site code, or taking the system offline. Oracle continues to improve and expand on the API endpoints and other development materials with future releases.

I will be working to integrate some of Fishbowl Solutions’ SPA taskflows into custom components for display on CEC sites, similar to what I’ve shown in the previous section, except that the taskflow code will be hooked into an existing on-premise WebCenter Content instance to serve back content housed in a locally managed database rather than the Document Cloud Service. This will give options to businesses looking to transition to the cloud service for benefits like cloud-hosted site servers, simple site/component management, and near-instant publishing, while still maintaining all the same content on-premise.

Another integration planned for future development is with the AuraPlayer service. AuraPlayer can wrap existing Oracle Forms/EBS systems as web services, which could then be surfaced on a Content and Experience Cloud site as a modern, mobile-friendly, responsive UI. With CEC already accessible from tablets and mobile devices, it stands out as a strong platform candidate.


Categories: Fusion Middleware, Other

Analytics on the edge?

DBMS2 - Fri, 2017-06-30 03:27

There’s a theory going around to the effect that:

  • Compute power is and will be everywhere, for example in cars, robots, medical devices or microwave ovens. Let’s refer to these platforms collectively as “real-world appliances”.
  • Much more data will be created on these platforms than can reasonably be sent back to centralized/cloudy servers.
  • Therefore, cloud-centric architectures will soon be obsolete, perhaps before they’re ever dominant in the first place.

There’s enough truth to all that to make it worth discussing. But the strong forms of the claims seem overblown.

1. This story doesn’t even make sense except for certain new classes of application. Traditional business applications run all over the world, in dedicated or SaaSy modes as the case may be. E-commerce is huge. So is content delivery. Architectures for all those things will continue to evolve, but what we have now basically works.

2. When it comes to real-world appliances, this story is partially accurate. An automobile is a rolling network of custom Linux systems, each running hand-crafted real-time apps, a few of which also have minor requirements for remote connectivity. That’s OK as far as it goes, but there could be better support for real-time operational analytics. If something as flexible as Spark were capable of unattended operation, I think many engineers of real-world appliances would find great ways to use it.

3. There’s a case to be made for something better yet. I think the argument is premature, but it’s worth at least a little consideration. 

There are any number of situations in which decisions are made on or about remote systems, based on models or rules that should be improved over time. For example, such decisions might be made in:

  • Machine vision or other “recognition”-oriented areas of AI.
  • Detection or prediction of malfunctions.
  • Choices as to what data is significant enough to ship back upstream.

In the canonical case, we might envision a system in which:

  • Huge amounts of data are collected and are used to make real-time decisions.
  • The models are trained centrally, and updated remotely over time as they are improved.
  • The remote systems can only ship back selected or aggregated data to help train the models.

This all seems like an awkward fit for any common computing architecture I can think of.

But it’s hard to pin down important examples of that “canonical” case. The story implicitly assumes:

  • A model is widely deployed.
  • The model does a decent job but not a perfect one.
  • Based on its successes and failures, the model gets improved.

And now we’re begging a huge question: What exactly is there that keeps score as to when the model succeeds and fails? Mathematically speaking, I can’t imagine what a general answer would be like.

4. So when it comes to predictive models executed on real-world appliances I think that analytic workflows will:

  • Differ for different categories of applications.
  • Rely in most cases on simple patterns of data movement, such as:
    • Stream everything to central servers and sort it out there, or if that’s not workable …
    • … instrument a limited number of test nodes to store everything, and recover the data in batch for analysis.
    • Update models only in the timeframes when you’re doing a full app update/refresh.

And with that, much of the apparent need for fancy distributed analytic architectures evaporates.

5. Finally, and notwithstanding the previous point: Across many use cases, there’s some kind of remote log data being shipped back to a central location. It may be the complete log. It may be periodic aggregates. It may include only what the edge nodes regard as significant events. But something is getting shipped home.

The architectures for shipping, receiving and analyzing such data are in many cases immature. That’s obvious if there’s any kind of streaming involved, or if analysis is done in Spark. Ditto if there’s anything we might call “non-tabular business intelligence”. As this stuff matures, it will in many cases fit very well with today’s cloud thinking. But in any case — it needs to mature.

Truth be told, even the relational case is immature, in that it can easily rely on what I called:

data warehouses (perhaps really data marts) that are updated in human real-time

That quote is from a recent post about Kudu, which:

  • Is designed for exactly that use case.
  • Went GA early this year.

As always, technology is in flux.


Categories: Other

Generally available Kudu

DBMS2 - Fri, 2017-06-16 10:52

I talked with Cloudera about Kudu in early May. Besides giving me a lot of information about Kudu, Cloudera also helped confirm some trends I’m seeing elsewhere, including:

  • Security is an ever bigger deal.
  • There’s a lot of interest in data warehouses (perhaps really data marts) that are updated in human real-time.
    • Prospects for that respond well to the actual term “data warehouse”, at least when preceded by some modifier to suggest that it’s modern/low-latency/non-batch or whatever.
    • Flash is often — but not yet always — preferred over disk for that kind of use.
    • Sometimes these data stores are greenfield. When they’re migrations, they come more commonly from analytic RDBMS or data warehouse appliance (the most commonly mentioned ones are Teradata, Netezza and Vertica, but that’s perhaps just due to those product lines’ market share), rather than from general purpose DBMS such as Oracle or SQL Server.
  • Intel is making it ever easier to vectorize CPU operations, and analytic data managers are increasingly taking advantage of this possibility.

Now let’s talk about Kudu itself. As I discussed at length in September 2015, Kudu is:

  • A data storage system introduced by Cloudera (and subsequently open-sourced).
  • Columnar.
  • Updatable in human real-time.
  • Meant to serve as the data storage tier for Impala and Spark.

Kudu’s adoption and roll-out story starts:

  • Kudu went to general availability on January 31. I gather this spawned an uptick in trial activity.
  • A subsequent release with some basic security features spawned another uptick.
  • I don’t think Cloudera will mind my saying that there are many hundreds of active Kudu clusters.
  • But Cloudera believes that, this soon after GA, very few Kudu users are in actual production.

Early Kudu interest is focused on 2-3 kinds of use case. The biggest is the kind of “data warehousing” highlighted above. Cloudera characterizes the others by the kinds of data stored, specifically the overlapping categories of time series — including financial trading — and machine-generated data. A lot of early Kudu use is with Spark, even ahead of (or in conjunction with) Impala. A small amount has no relational front-end at all.

Other notes on Kudu include:

  • Solid-state storage is recommended, with a few terabytes per node.
  • You can also use spinning disk. If you do, your write-ahead logs can still go to flash.
  • Cloudera said Kudu compression ratios can be as low as 2-5X, or as high as 10-20X. With that broad a range, I didn’t drill down into specifics of what they meant.
  • There seem to be a number of Kudu clusters with 50+ nodes each. By way of contrast, a “typical” Cloudera customer has 100s of nodes overall.
  • As you might imagine from their newness, Kudu security features — Kerberos-based — are at the database level rather than anything more granular.

And finally, the Cloudera folks woke me up to some issues around streaming data ingest. If you stream data in, there will be retries resulting in duplicate delivery. So your system needs to deal with those one way or another. Kudu’s way is:

  • Primary keys will be unique. (Note: This is not obvious in a system that isn’t an entire RDBMS in itself.)
  • You can configure uniqueness to be guaranteed either through an upsert mechanism or by simply rejecting duplicates.
  • Alternatively, you can write code to handle duplication errors, e.g. via Spark.
Categories: Other

The data security mess

DBMS2 - Wed, 2017-06-14 08:21

A large fraction of my briefings this year have included a focus on data security. This is the first year in the past 35 that that’s been true.* I believe that reasons for this trend include:

  • Security is an important aspect of being “enterprise-grade”. Other important checkboxes have been largely filled in. Now it’s security’s turn.
  • A major platform shift, namely to the cloud, is underway or at least being planned for. Security is an important thing to think about as that happens.
  • The cloud even aside, technology trends have created new ways to lose data, which security technology needs to address.
  • Traditionally paranoid industries are still paranoid.
  • Other industries are newly (and rightfully) terrified of exposing customer data.
  • My clients at Cloudera thought they had a chance to get significant messaging leverage from emphasizing security. So far, it seems that they were correct.

*Not really an exception: I did once make it a project to learn about classic network security, including firewall appliances and so on.

Certain security requirements, desires or features keep coming up. These include (and as in many of my lists, these overlap):

  • Easy, comprehensive access control. More on this below.
  • Encryption. If other forms of security were perfect, encryption would never be needed. But they’re not.
  • Auditing. Ideally, auditing can alert you to trouble before (much) damage is done. If not, then it can at least help you do proactive damage control in the face of breach.
  • Whatever regulators mandate.
  • Whatever is generally regarded as best practices. Security “best practices” generally keep enterprises out of legal and regulatory trouble, or at least minimize same. They also keep employees out of legal and career trouble, or minimize same. Hopefully, they even keep data safe.
  • Whatever the government is known to use. This is a common proxy for “best practices”.

More specific or extreme requirements include: 

I don’t know how widely these latter kinds of requirements will spread.

The most confusing part of all this may be access control.

  • Security has a concept called AAA, standing for Authentication, Authorization and Accounting/Auditing/Other things that start with “A”. Yes — even the core acronym in this area is ill-defined.
  • The new standard for authentication is Kerberos. Or maybe it’s SAML (Security Assertion Markup Language). But SAML is actually an old, now-fragmented standard. But it’s also particularly popular in new, cloud use cases. And Kerberos is actually even older than SAML.
  • Suppose we want to deny somebody authorization to access certain raw data, but let them see certain aggregated or derived information. How can we be sure they can’t really see the forbidden underlying data, except through a case-by-case analysis? And if that case-by-case analysis is needed, how can the authorization rules ever be simple?

Further confusing matters, it is an extremely common analytic practice to extract data from somewhere and put it somewhere else to be analyzed. Such extracts are an obvious vector for data breaches, especially when the target system is managed by an individual or IT-weak department. Excel-on-laptops is probably the worst case, but even fat-client BI — both QlikView and Tableau are commonly used with local in-memory data staging — can present substantial security risks. To limit such risks, IT departments are trying to impose new standards and controls on departmental analytics. But IT has been fighting that war for many decades, and it hasn’t won yet.

And that’s all when data is controlled by a single enterprise. Inter-enterprise data sharing confuses things even more. For example, national security breaches in the US tend to come from government contractors more than government employees. (Ed Snowden is the most famous example. Chelsea Manning is the most famous exception.) And as was already acknowledged above, even putting your data under control of a SaaS vendor opens hard-to-plug security holes.

Data security is a real mess.

Categories: Other

Light-touch managed services

DBMS2 - Wed, 2017-06-14 08:14

Cloudera recently introduced Cloudera Altus, a Hadoop-in-the-cloud offering with an interesting processing model:

  • Altus manages jobs for you.
  • But you actually run them on your own cluster, and so you never have to put your data under Altus’ control.

Thus, you avoid a potential security risk (shipping your data to Cloudera’s service). I’ve tentatively named this strategy light-touch managed services, and am interested in exploring how broadly applicable it might or might not be.

For light-touch to be a good approach, there should be (sufficiently) little downside in performance, reliability and so on from having your service not actually control the data. That assumption is trivially satisfied in the case of Cloudera Altus, because it’s not an ordinary kind of app; rather, its whole function is to improve the job-running part of your stack. Most kinds of apps, however, want to operate on your data directly. For those, it is more challenging to meet acceptable SLAs (Service-Level Agreements) on a light-touch basis.

Let’s back up and consider what “light-touch” for data-interacting apps (i.e., almost all apps) would actually mean. The basics are: 

  • The user has some kind of environment that manages data and executes programs.
  • The light-touch service, running outside this environment, spawns one or more app processes inside it.
  • Useful work ensues …
  • … with acceptable reliability and performance.
  • The environment’s security guarantees ensure that data doesn’t leak out.

Cases where that doesn’t even make sense include but are not limited to:

  • Transaction-processing applications that are carefully tuned for efficient database access.
  • Applications that need to be carefully installed on or in connection with a particular server, DBMS, app server or whatever.

On the other hand:

  • A light-touch service is at least somewhat reasonable in connection with analytics-oriented data-management-plus-processing environments such as Hadoop/Spark clusters.
  • There are many workloads over Hadoop clusters that don’t need efficient database access. (Otherwise Hive use would not be so prevalent.)
  • Light-touch efforts seem more likely to be helped than hurt by abstraction environments such as the public cloud.

So we can imagine some kind of outside service that spawns analytic jobs to be run on your preferred — perhaps cloudy — Hadoop/Spark cluster. That could be a safe way to get analytics done over data that really, really, really shouldn’t be allowed to leak.

But before we anoint light-touch managed services as the NBT (Next Big Thing/Newest Bright Thought), there’s one more hurdle for it to overcome — why bother at all? What would a light-touch managed service provide that you wouldn’t also get from installing packaged software onto your cluster and running it in the usual way? The simplest answer is “The benefits of SaaS (Software as a Service)”, and so we can rephrase the challenge as “Which benefits of SaaS still apply in the light-touch managed service scenario?”

The vendor perspective might start, with special cases such as Cloudera Altus excepted:

  • The cost-saving benefits of multi-tenancy mostly don’t apply. Each instance winds up running on a separate cluster, namely the customer’s own. (But that’s likely to be SaaS/cloud itself.)
  • The benefits of controlling your execution environment apply at best in part. You may be able to assume the customer’s core cluster is through some cloud service, but you don’t get to run the operation yourself.
  • The benefits of a SaaS-like product release cycle do mainly apply.
    • Only having to support the current version(s) of the product is a little limited when you don’t wholly control your execution environment.
    • Light-touch doesn’t seem to interfere with the traditional SaaS approach of a rapid, incremental product release cycle.

When we flip to the user perspective, however, the idea looks a little better.

Bottom line: Light-touch managed services are well worth thinking about. But they’re not likely to be a big deal soon.

Categories: Other

Cloudera Altus

DBMS2 - Wed, 2017-06-14 08:12

I talked with Cloudera before the recent release of Altus. In simplest terms, Cloudera’s cloud strategy aspires to:

  • Provide all the important advantages of on-premises Cloudera.
  • Provide all the important advantages of native cloud offerings such as Amazon EMR (Elastic MapReduce), or at least come sufficiently close to that goal.
  • Benefit from customers’ desire to have on-premises and cloud deployments that work:
    • Alike in any case.
    • Together, to the extent that that makes use-case sense.

In other words, Cloudera is porting its software to an important new platform.* And this port isn’t complete yet, in that Altus is geared only for certain workloads. Specifically, Altus is focused on “data pipelines”, aka data transformation, aka “data processing”, aka new-age ETL (Extract/Transform/Load). (Other kinds of workload are on the roadmap, including several different styles of Impala use.) So what about that is particularly interesting? Well, let’s drill down.

*Or, if you prefer, improving on early versions of the port.

Since so much of the Hadoop and Spark stacks is open source, competition often isn’t based on core product architecture or features, but rather on factors such as:

  • Ease of management. This one is nuanced in the case of cloud/Altus. For starters:
    • One of Cloudera’s main areas of differentiation has always been Cloudera Manager.
    • Cloudera Director was Cloudera’s first foray into cloud-specific management.
    • Cloudera Altus features easier/simpler management than Cloudera Director, meant to be analogous to native Amazon management tools, and good-enough for use cases that don’t require strenuous optimization.
    • Cloudera Altus also includes an optional workload analyzer, in slight conflict with other parts of the Altus story. More on that below.
  • Ease of development. Frankly, this rarely seems to come up as a differentiator in the Hadoop/Spark world, various “notebook” offerings such as Databricks’ or Cloudera’s notwithstanding.
  • Price. When price is the major determinant, Cloudera is sad.
  • Open source purity. Ditto. But at most enterprises — at least those with hefty IT budgets — emphasis on open source purity either is a proxy for price shopping, or else boils down to largely bogus concerns about vendor lock-in.

Of course, “core” kinds of considerations are present to some extent too, including:

  • Performance, concurrency, etc. I no longer hear many allegations of differences in across-the-board Hadoop performance. But the subject does arise in specific areas, most obviously in analytic SQL processing. It arises in the case of Altus as well, in that Cloudera improved in a couple of areas that it concedes were previously Amazon EMR advantages, namely:
    • Interacting with S3 data stores.
    • Spinning instances up and down.
  • Reliability and data safety. Cloudera mentioned that it did some work so as to be comfortable with S3’s eventual consistency model.

Recently, Cloudera has succeeded at blowing security up into a major competitive consideration. Of course, they’re trying that with Altus as well. Much of the Cloudera Altus story is the usual — rah-rah Cloudera security, Sentry, Kerberos everywhere, etc. But there’s one aspect that I find to be simple yet really interesting:

  • Cloudera Altus doesn’t manage data for you.
  • Rather, it launches and manages jobs on a separate Hadoop cluster.

Thus, there are very few new security risks to running Cloudera Altus, beyond whatever risks are inherent to running any version of Hadoop in the public cloud.

Where things get a bit more complicated is with some features for workload analysis.

  • Cloudera recently introduced some capabilities for on-the-fly trouble-shooting. That’s fine.
  • Cloudera has also now announced an offline workload analyzer, which compares actual metrics computed from your log files to “normal” ones from well-running jobs. For that, you really do have to ship information to a separate cluster managed by Cloudera.

The information shipped is logs rather than actual query results or raw data. In theory, an attacker who had all those logs could conceivably make inferences about the data itself; but in practice, that doesn’t seem like an important security risk at all.

So is this an odd situation where that strategy works, or could what we might call light-touch managed services turn out to be widespread and important? That’s a good question to address in a separate post.

Categories: Other

A Sneak Peek at Oracle’s Chatbot Cloud Service and 5 Key Factors Necessary for Bot ROI

In early May, I flew out to Oracle HQ in San Francisco for an early look at their yet-to-be-released Oracle Intelligent Bots Service. The training left me ecstatic that the technology to quickly build great chatbots is finally here. However, the question remains: can chatbots provide real value for your business?

What is a chatbot?

A chatbot is a program that simulates a conversation partner over a messaging app. It can integrate with any kind of messaging client, such as Facebook, WeChat, WhatsApp, Slack, Skype, or you could even build your own client. If you’ve been following our blog, you may have already seen the chatbot (Atlas) we built as part of our annual hackathon.

Here is an example conversation I had with Atlas recently:

Chatbot Conversations

Chatbots use Natural Language Processing and Machine Learning algorithms to take what the user said and match it up against pre-defined conversations. Understanding how chatbots recognize phrases can help determine what conversations a user could have with a bot. Here is some chatbot terminology:

  • An intent is something the user wants, and the bot maps this to an action. For example, the user might want to say some form of “Hi” to the bot, and we would want the bot to respond with a random greeting. A chatbot generally has up to 2,000 intents.
  • Utterances are examples of different phrases that represent an intent. An intent might have 10-15 utterances. The bot will be able to match statements similar to those utterances to the intent, but what a user says doesn’t have to exactly match an utterance. This is where the language processing algorithms are used.
  • Entities are key variables the bot can parse from the intent.

Suppose we are building an HR chatbot that can help users reset passwords. The goal is for our bot to understand that the user needs a password reset link, and then send the correct link to the user. Our intent could be called Password Reset. Since the user could have accounts for different services, we would need to create an entity called AccountType for our bot to parse from what the user said. AccountType could map to “Gitlab”, “WebCenter”, or “OpenAir”.

As a rough design, we could start with:

  • Intent: Password Reset
  • Utterances:
    • I’d like to reset my password.
    • How do I change my password for Gitlab?
    • I forgot my WebCenter pw, can you help?
    • Please assist me in receiving a new password.
    • Forgot my passcode for OpenAir.
    • Give me another password.
  • Entity: AccountType (Gitlab, WebCenter, OpenAir)
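Expressed as data, that rough design might look like the sketch below. This is purely illustrative: it mirrors the concepts of intents, utterances, and entities, but it is not the actual definition format used by Oracle Intelligent Bots, and the toy resolver only stands in for the NLP/ML matching the real service performs.

```typescript
// Illustrative data structures for the Password Reset design above.
// This is not Oracle's intent-definition format, just a way to show how
// intents, utterances, and entities relate to each other.
interface EntityDefinition {
  name: string;
  values: string[];
}

interface IntentDefinition {
  name: string;
  utterances: string[];
  entities: EntityDefinition[];
}

const passwordReset: IntentDefinition = {
  name: 'Password Reset',
  utterances: [
    "I'd like to reset my password.",
    'How do I change my password for Gitlab?',
    'I forgot my WebCenter pw, can you help?',
    'Please assist me in receiving a new password.',
    'Forgot my passcode for OpenAir.',
    'Give me another password.',
  ],
  entities: [{ name: 'AccountType', values: ['Gitlab', 'WebCenter', 'OpenAir'] }],
};

// Toy entity resolver: the real service uses NLP/ML models, not keyword matching.
function resolveAccountType(userText: string): string | undefined {
  return passwordReset.entities[0].values.find((value) =>
    userText.toLowerCase().includes(value.toLowerCase())
  );
}

console.log(resolveAccountType('I forgot my WebCenter pw, can you help?')); // "WebCenter"
```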

Intents like this one will need to be set up for a bot to know what to do when a user says something. If a user asks the bot a question it doesn’t have an intent for, it won’t know what to do and the user will get frustrated. Our bot still won’t know how to order a pizza, but it could help with password resets.

Key Factor #1: Chatbots should have a purpose

A chatbot can only answer questions it is designed to answer. If I was building an HR Help chatbot, it probably would not be able to order a pizza, rent a car for you, or check the weather. It could, for example, reset passwords, report harassment, set up a new hire, and search for policies. Once the requirements are set, developers can build, design, and test to ensure the bot has those capabilities.

This makes it important to set expectations with the user on what types of questions they can ask it, without giving the user a list of questions. Introducing a bot along with its purpose will help with this. For example, we could have the HR Help Bot, the Travel Planning bot, or the Sales Rep Info bot. If we introduced the Fishbowl Ask-Me-Anything bot, users would start asking it a lot of questions we didn’t plan for it to be able to answer.

Conversations can be more complicated than a simple back-and-forth or question-and-answer exchange. The capability is there (Oracle’s solution gives developers full control over a Conversational State Machine), but I have yet to explore it fully.

Once a purpose and a set of intents are identified, a chatbot could be a useful tool to engage customers or employees.

Key Factor #2: Design Architecture

Bots are great for interacting with different services. Oracle Intelligent Bots Service is designed to make it easy for developers to make REST API calls and database lookups in between parsing what the user says and returning a response.
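As a hypothetical sketch of that middle step (this shows the general pattern, not the Oracle custom-component SDK), a fulfillment handler might take the resolved intent and entity, call a backend REST service, and build the reply. The HR endpoint and response shape below are invented for illustration.

```typescript
// Hypothetical fulfillment handler: given a resolved intent and entity, call a
// backend service and build the bot's reply. The endpoint URL and response
// shape are invented for illustration and are not part of any Oracle API.
interface ResolvedInput {
  intent: string;                    // e.g. "Password Reset"
  entities: Record<string, string>;  // e.g. { AccountType: "WebCenter" }
}

async function handlePasswordReset(input: ResolvedInput): Promise<string> {
  const account = input.entities['AccountType'];
  if (!account) {
    // Prompt for the missing entity instead of guessing.
    return 'Which account do you need a reset link for: Gitlab, WebCenter, or OpenAir?';
  }

  // Hypothetical HR service endpoint that returns a reset link for the account type.
  const response = await fetch(
    `https://hr.example.com/api/password-reset-link?account=${encodeURIComponent(account)}`
  );
  if (!response.ok) {
    return `Sorry, I couldn't reach the password service for ${account}. Please try again later.`;
  }

  const { link } = await response.json();
  return `Here is your ${account} password reset link: ${link}`;
}
```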

Here are a few things to think about when designing a bot’s architecture:

  • Integrations: What services will the bot interact with?
  • Security: Are users typing their bank account number over Facebook chat?
  • Human interaction: How will the bot flip users over to a human to help when they get frustrated?
  • Infrastructure: What will be on premise and what will be in the cloud?
  • Performance: How to minimize network requests?
Key Factor #3: Analytics

Analytics can be used to improve the bot’s capability over time and understand the impact on the company. Some companies may already have metrics around help desk call volume or customer conversion rates, and it would be interesting to compare that data from before and after a bot’s release.

Beyond that, bot analytics will be able to show how the bot itself is performing. Analytics could show the top questions a bot is asked but can’t answer, how many questions it answers successfully each day, and which questions it mistook for something else. Oracle’s chatbot solution will have some analytics capabilities built in, and the platform is flexible enough that it will be possible to gather almost any data about a bot.

Key Factor #4: Bot Building Best Practices

There is a lot to do when it comes to building the bot, from setting up the infrastructure and connecting all the services to filling out all the utterances. There are also some best practices to keep in mind.

The bot should sound like a human. Personality can play a big role in giving users a better interaction.

As users become more familiar with chatbots, there will also be a set of questions they expect every bot to be able to answer. This list might start with:

  • Hi.
  • What do you do?
  • Are you human?
  • Help!
  • Tell me a joke.
  • How are you?

When the bot is going to run a query or API call that may take a while, it is important to warn the user in advance and echo back that the bot understood what the user wanted. Some messaging apps also support “is typing” statuses, which are another great way to show that the bot is thinking.

Key Factor #5: Testing

Users have high expectations for the intelligence level of a chatbot. They expect the Machine Learning algorithms to work well, and the bot to seem smart. If the bot doesn’t meet their expectations on the first try, they are unlikely to use the bot in the future.

Testing and tuning utterances can make the difference in making a bot seem smart. The bot should be able to accurately map what a user says to the correct intent. Oracle’s chatbot solution has some nice capabilities for testing utterances and intents and making sure what the user says maps to the correct intent.

Chatbots are another piece of software, so it is important to do performance and user testing on them as well.

Conclusion

Chatbots are a great way to tie a single user interface to a large variety of services, or to automate repetitive conversations. There are plenty of business use cases that would benefit from a chatbot, but the ROI depends on thorough requirements gathering and on using analytics to optimize the bot. That being said, companies that have already started down the path – like this Accounting Firm in Minneapolis – are seeing benefits from bots automating manual processes, leading to reductions in operating costs of 25 to 40%. Savings like this will vary by use case and industry, but overall the automation gains from a bot are there regardless of what the bot is being used for. We would love to discuss your ideas on how a chatbot could help your business. Leave a comment or contact us with any questions.


Categories: Fusion Middleware, Other

Unboxing the Future of Fishbowl’s On-premise Enterprise Search Offering: Mindbreeze InSpire

Back on April 3rd, Fishbowl announced that we had formed a partner relationship with Mindbreeze to bring their industry leading enterprise search solutions to Fishbowl customers. We will offer their Mindbreeze InSpire search appliance to customers looking for an on-premise solution to search internal file shares, databases, document management systems and other enterprise repositories.

Since that announcement, we have been busy learning more about Mindbreeze InSpire, including sending some members of our development team to the partner technical training in Linz, Austria. We have also procured our own InSpire search appliance so that we can begin development of connectors for Oracle WebCenter Content and PTC Windchill, and we will begin using InSpire as the search system for our own internal content.

Fishbowl’s Mindbreeze InSpire appliance arrived last week, and we wanted to share a few pics of the unboxing and racking process. We are very excited about the value that Mindbreeze InSpire will bring to customers, including reducing the time spent searching for, and in many cases not finding, high-value information. Consider these stats:

  • 25% of employees’ time is spent looking for information – AIIM
  • 50% of people need to search 5 or more sources – AIIM
  • 38% of time is spent unsuccessfully searching and recreating content – IDC

Stay tuned for more information on Fishbowl’s software and services for Mindbreeze InSpire. Demos of the system are available today, so contact us below or leave a comment here if you would like to see it in action.



Categories: Fusion Middleware, Other

How to Configure Microsoft IIS with Oracle WebCenter

I was setting up an Oracle WebCenter 12c Suite in a local development environment running Windows Server 2012 R2 with Microsoft SQL Server. Instead of using OHS (Oracle HTTP Server), I wanted to try using Microsoft IIS (Internet Information Services) to handle the forwarding of sub-sites to the specified ports. Since the Oracle applications run on specific ports (e.g., 16200 for Content Server), a request for the domain on the default ports (80 and 443) won’t reach the content server – for example, www.mydomain.com/cs vs. www.mydomain.com:16200/cs. The reason I chose IIS is that it is already a feature built into Windows Server, and thus one less application to manage.

That being said, IIS and OHS perform in the same manner but are set up and configured differently based on requirements. Oracle provides documentation about using the Oracle Plug-in for Microsoft IIS, but the content on the Oracle site is pretty outdated. The page first references IIS 6.0, which was released with Windows Server 2003 in April 2003 and reached end of support on July 14, 2015. Lower on the page, there are steps for IIS on Windows Server 2012 R2, which got me started. In the rest of this post, I will review the steps I took to get everything working, as well as the limitations I encountered.

Step 1: Install IIS on the Server

The first step is to install IIS on the server. In Server 2012, open Server Manager and select Add Roles and Features. From there, select the option to add the IIS components.

Step 2: Select Default Web Site

Once IIS has been installed, open it and select the Default Web Site. If you right-click and select Edit Bindings, you can see the default site is bound to port 80, which is what we want since port 80 is the default HTTP port.

Step 3: Select Application Pools

Following the instructions from Oracle, download the plug-in and put it in a folder close to the root level of the desired drive. For this blog, I have it in C:\IISProxy\. For each server (Content Server, Portal, etc.) you need to perform configuration in IIS. Open IIS and navigate to the Application Pools section. Select Add Application Pool and create a pool with a specific name for each server. Separate application pools are needed for the port-specific forwarding to work correctly.

Step 4: Configure Properties

Once the pool is created, open Windows Explorer and create a folder inside IISProxy called “CS.” Copy all the plug-in files into the CS folder. Now open the iisproxy.ini file and configure the properties to match your environment. Make sure to set the Debug parameter appropriately for your environment.
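For reference, a minimal iisproxy.ini for forwarding /cs requests to the Content Server managed server might look like the sketch below. The host, port, and Debug values are placeholders for this environment, and the parameter names should be confirmed against the plug-in documentation for your version.

```ini
# Sketch only: placeholder values pointing at a local Content Server managed server.
WebLogicHost=localhost
WebLogicPort=16200
WlForwardPath=/cs
Debug=OFF
```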

Step 5: Select the Created Application Pool

Open IIS and select the Default Web Site option. Right-click and select Add Application. Add the alias name and select the application pool created above. Set the physical path to the folder created above and make sure the connection is set up for pass-through authentication.

Step 6: Set Up Handler Mappings

Once OK has been selected, the application should be displayed in the tree on the left. The next step is to set up handler mappings that determine how IIS will handle incoming requests. Click on the “cs” application you just created; on the main display there should be a Handler Mappings icon. Double-click it. This is where we will set up the routing of static files vs. content server requests. On the right side, click the “Add Script Map” icon. Add a request path of “*” and add the path to the iisproxy.dll. Open the request restrictions and verify the “Invoke handler…” checkbox is unchecked. Open the Access tab and select the Script radio button. Click OK and verify the mapping has been applied.


Step 7: Map Static Files

Next, we will set up the mapping for static files. Click “Add Module Mapping”. Add “*” for the request path, enter “StaticFileModule,DefaultDocumentModule,DirectoryListingModule” for the Module, and give the mapping a name. Open request restrictions and select the “File or folder” radio option. Navigate to the Access tab and select the Read radio button. Click OK and verify the mapping was applied.


Step 8: Verify Mapping Execution

After the mappings have been set up, we need to verify they are executed in the correct order. Do this by going back to the Handler Mappings screen and clicking “View Ordered List”.

Step 9: Restart the IIS Server

After these steps are completed, restart the IIS server. To do this, open a command prompt as an administrator and type “iisreset”. Once restarted, you should now be able to view the content server on port 80. If you have other redirects you would like to set up, you can repeat the same steps above with a different name (e.g., Portal, Inbound Refinery, Console, Enterprise Manager, etc.).

With Oracle’s tutorial out of date and missing key steps, it was difficult to determine how to set everything up. After some trial and error and investigation, the 9 steps above should help you quickly set up IIS with the WebCenter Suite in a Windows environment so that specific port numbers are not needed. Obviously, as with any technology decision, an evaluation should take place to determine whether IIS or OHS is a better fit. Good luck, and leave a comment if you have any questions or need further clarification.


Categories: Fusion Middleware, Other
