Feed aggregator

Protecting Your Empire and Shortermism

Tim Hall - Sat, 2016-10-22 07:06

Followers of the blog know that I'm far from being an expert at APEX, but I recently did an APEX presentation at work. As a follow-up I sent out an email with a bunch of links to online tutorials, YouTube playlists and documentation etc. One of my colleagues replied saying,

“It’s really wonderful having someone so knowledgeable who actually shares knowledge here as well as at these conferences !!!”

I was thinking about that comment this morning and it raised two questions in my mind.

  1. Do any people contribute to the online community and present at conferences, but not do knowledge spreading in their company?
  2. Does anyone still believe that withholding information for the purposes of protecting your own little empire is a successful strategy these days?

Regarding the first question, I think it would be pretty sad if people are doing knowledge spreading in the community, but not giving their colleagues the benefit of their experience. At minimum they could be pointing their colleagues to their community work, but it would be better if they could personalise it for their colleagues. In the case of my recent presentation, I used applications from work in my demos that I would never show at a conference. I think that helps put things into context.

The answer to the second question interests me a lot more. When I started in IT, the internet as we know it now didn't exist. The only way to learn anything was using the manuals (typically out-of-date paper copies) or asking a colleague. At that point it was possible for people to protect their empire by hiding information, which I saw happen many times. Typically the people who did this were despised. What's more, at the first opportunity they would be cut out of the mix for future projects, for fear of them expanding their empire of secrecy.

Fast forward to today and I can Google just about anything. The only thing you could possibly try to hide from me is company-specific information, but if your company allows you to do this they are fools.

Trying to protect your empire by hiding information stinks of shortermism. You may be successful in the short term, fooling people into believing you are indispensable, but in the long term they will realise what you are doing and you will fail. I've never been in a position where knowledge spreading and being open with information has led to a negative result. Theoretically it makes you easier to replace, but in practice that is not the case. It allows people to see what you are doing, what else you are capable of doing, and that you are not the sort of dick that will try to hold the company to ransom in the future.

Cheers

Tim…

PS. Please don’t ask me questions about APEX. I’m rubbish at it and I’m just going to point you to the OTN APEX Forum where the real experts play.

Protecting Your Empire and Shortermism was first posted on October 22, 2016 at 1:06 pm.

Are Fixed Objects Stats needed in a PDB?

Syed Jaffar - Sat, 2016-10-22 04:53
It doesn't matter whether we are working on a new technology or on something we are very familiar with. When we encounter tricky situations, sometimes we can find nothing in the manuals, nor any useful references.

All you need then is to take action yourself and become a reference for others.

Mike Dietrich received an interesting question about whether gathering fixed objects statistics is required for individual PDBs or just on the ROOT container. He did a small test and explained the answer on blogs.oracle.com.

https://blogs.oracle.com/UPGRADE/entry/gather_fixed_objects_stats_in


Catch ORA-28002 in Oracle forms

Tom Kyte - Sat, 2016-10-22 04:46
Hello, How can I catch warning ora-28002 (password will expire within days) in Oracle forms 6i? I tried in several triggers but with no success. Is there a way to catch such warnings in Forms? Also for example the ora-28001 (expired) code. ...
Categories: DBA Blogs

commands execution sequence of sqlplus

Tom Kyte - Sat, 2016-10-22 04:46
dear Tom, I have command echo "exit" | sqlplus $CONDB @LONGSQL in AIX. Questions: 1. will sqlplus always execute exit after executing LONGSQL. 2. If there is no exit in LONGSQL?what is the better way to let sqlplus exit after executin...
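A minimal sketch of one way to make the exit independent of the pipe (a hypothetical LONGSQL.sql, not the asker's actual script): let the script stop on errors and end the session itself, so SQL*Plus exits even when nothing is fed to stdin:

WHENEVER SQLERROR EXIT SQL.SQLCODE
-- the long-running statements go here, for example:
SELECT COUNT(*) FROM dba_objects;
EXIT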
Categories: DBA Blogs

Result of view, if base table is modified

Tom Kyte - Sat, 2016-10-22 04:46
Hi Tom, During a recent interview, I was asked that what happens to a view, if the base table is modified. For eg. I have a table emp with 3 columns viz eid, did, sal. I have created a view vw_emp as CREATE VIEW vw_emp AS SELECT * FROM emp; Thing...
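A minimal sketch (using the table from the question) of the point that usually matters here: SELECT * is expanded into a fixed column list when the view is created, so a column added to the base table later does not appear in the view until the view is re-created:

CREATE TABLE emp (eid NUMBER, did NUMBER, sal NUMBER);
CREATE VIEW vw_emp AS SELECT * FROM emp;
ALTER TABLE emp ADD (bonus NUMBER);
-- the view still exposes only EID, DID and SAL:
SELECT column_name FROM user_tab_columns WHERE table_name = 'VW_EMP';
-- CREATE OR REPLACE VIEW vw_emp AS SELECT * FROM emp;  -- would pick up BONUS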
Categories: DBA Blogs

ORAPWD file usage and its not using properly

Tom Kyte - Sat, 2016-10-22 04:46
Hi Tom, I dont know below is fault or am i missing something in the architecture. Iam using orapwd file , -- When connecting from OS level password will be taken from this file -- once i change / recreate the ORAPWD file its not working as ...
Categories: DBA Blogs

Extract domain names from a column having multiple email addresses

Tom Kyte - Sat, 2016-10-22 04:46
Hi Tom, i am trying to extract the domain names from a comma delimited email address and show them as comma delimited. i was successful to some extent where i am able to grab the domain name using REGEXP_SUBSTR and REGEXP_REPLACE and show them in...
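A minimal sketch of one possible approach (made-up sample data, since the question is truncated above): split the comma-delimited list with REGEXP_SUBSTR, keep the part after the @, and glue the domains back together with LISTAGG:

WITH t AS (SELECT 'a@acme.com,b@example.org' AS emails FROM dual)
SELECT LISTAGG(REGEXP_SUBSTR(email, '@(.+)$', 1, 1, NULL, 1), ',')
         WITHIN GROUP (ORDER BY email) AS domains
FROM  (SELECT REGEXP_SUBSTR(emails, '[^,]+', 1, LEVEL) AS email
       FROM   t
       CONNECT BY LEVEL <= REGEXP_COUNT(emails, '[^,]+'));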
Categories: DBA Blogs

Oracle OpenWorld 2016 - Data Integration Recap

Rittman Mead Consulting - Fri, 2016-10-21 16:33

I know it's been about a month since Oracle OpenWorld 2016 concluded, but I wanted to give a brief recap on a few things that I thought were interesting in the data integration space. During the week prior to OpenWorld, I had the privilege of attending the Oracle ACE Director Briefing. Over 2 days, ACE Directors were provided an early preview of what's to come down the Oracle product pipeline. The importance of the event is easy to note, as Thomas Kurian himself spends an hour plus providing the initial product overview and answering questions. The caveat: the entire session is under NDA (as you might imagine). But, the good thing is that several of the early preview products were announced the following week at Oracle OpenWorld. Here's what I saw that might impact the Oracle Data Integration product area most.

Data Flow ML

Take an ETL tool, add the cloud, and mix in the latest Big Data technologies and methodologies and you have Data Flow ML. This cloud-based tool is built for stream or batch processing, or both, following the Lambda architecture. Data is ingested into Kafka topics, transformed using Spark Streaming, and loaded into a final target, which may be created automatically by DFML. Along the way, Spark ML is used to profile the data and make suggestions for how to enrich the data with internal or external sources. The technology is still in its early stages but keep an eye out on the Rittman Mead blog for more information over the next few months.

Data Integration in the Cloud

Oracle Data Integrator Cloud Service is coming soon and with it, new licensing options. ODI can be deployed in the cloud on Java Cloud Service or Big Data Cloud Service, or it can be deployed on-premises for more of a hybrid environment. From a licensing perspective, ODICS can be a monthly subscription or you can BYOL (bring your own license) and run ODI from any compute resource. This flexibility allows you to push down the transformation execution to the location of the data, rather than moving the data to the transformation engine - a standard for Oracle Data Integrator.

Oracle Data Integrator 12.2.1.2

Coming soon, the next patchset release for Oracle Data Integrator 12c. Features discussed at Oracle OpenWorld were:

  • Git Integration and Smart Merge:
    This release will introduce a new integration for lifecycle management, Git, adding to the current integration option of Subversion. Not only that, but ODI will finally provide "smart" merge functionality to allow an automated merge of a branch into the trunk.
  • Kafka and Spark Streaming:
    With ODI for Big Data, streaming integration is coming. Use of Apache Kafka as a source or target and Spark Streaming integration for transformations will allow for more real-time processing of data. The introduction of Cassandra as a data source is another enhancement for the Big Data release.
  • RESTful Connectivity:
    Another long awaited feature is REST web service integration. A new technology, similar to the SOAP web service integration, will be available and able to connect to any RESTful service. Along with that, BICS and Storage Cloud Service integration will be introduced.

There are definitely many other interesting looking products and product updates coming (or already here), including GoldenGate Service Architecture, updates to the GoldenGate Cloud Service, Big Data Cloud Service, Big Data Compute and several others. It’s an interesting time as the Oracle shift to the cloud continues - and data integration won’t be left behind.

Categories: BI & Warehousing

Elasticsearch for PeopleSoft Now Available!

PeopleSoft Technology Blog - Fri, 2016-10-21 13:05

We’ve been announcing for some time that Elasticsearch would be available for PeopleSoft, and that day has come!  Customers can now download a DPK from My Oracle Support and install and configure Elasticsearch for their PeopleSoft systems.  Elasticsearch is available for the PeopleTools 8.55.11 patch, and customers must be on PeopleTools 8.55.11 or higher to use Elasticsearch.  You can get the DPK from the PeopleTools Maintenance Page on MOS.  Elasticsearch DPKs are available for Linux and Windows.

There is also documentation to help with installation, deployment, and maintenance.  Visit the Elasticsearch Documentation Home Page on My Oracle Support.  If you are currently using Secure Enterprise Search (SES), we cover how to transition from SES to Elasticsearch.  If you are not using SES, we cover how to do a fresh install of Elasticsearch.  In the near future we will provide additional resources including Oracle University Training, a Spotlight Series video, and more.  Our Cumulative Feature Overview tool has been updated with Elasticsearch content.

All of our testing indicates that Elasticsearch will be significantly easier to install and maintain and will perform much better than SES both for indexing and results retrieval.  With this big improvement, we hope customers will take advantage of Elasticsearch and make searching an important part of their user experience.  Search can be especially valuable with the Fluid User Experience because the Fluid header—which includes the Search widget—is always available, so users will be able to initiate a search from any context and at any point of their process.

Note that Oracle Secure Enterprise Search (SES) will be supported until April 30, 2018, eighteen months after Elasticsearch is delivered in PeopleTools 8.55.11.

2 x ODA X6-2S + Dbvisit Standby: Easy DR in SE

Yann Neuhaus - Fri, 2016-10-21 11:28

What do Standard Edition, simplicity, reliability, high performance, and an affordable price have in common?
Dbvisit standby is one answer, because it brings Disaster Recovery to Standard Edition without adding complexity.
ODA Lite (the new X6-2S and 2M) is another answer, because you can run Standard Edition on these new appliances.
So it makes sense to bring them together, which is what I did recently at a customer site.

I'll not cover the reasons and the results here, as this will be done later. I'm just sharing a few tips to set up the following configuration: two ODA X6-2S running 12c Standard Edition databases, protected by Dbvisit standby across two datacenters.

ODA repository

ODA X6 comes with a new interface to provision databases from the command line (odacli) or the GUI (https://oda:7093/mgmt/index.html). It's a layer over the tools we usually use: it calls dbca behind the scenes. What it does in addition is log what has been done in a Java DB repository.

What is done is logged in /opt/oracle/dcs/log/dcs-agent.log:
2016-10-13 15:33:59,816 DEBUG [Database Creation] [] c.o.d.a.u.CommonUtils: run: cmd= '[su, -, oracle, -c, export PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin:/u01/app/oracle/product/12.1.0.2/dbhome_2/bin; export ORACLE_SID=MYNEWDB; export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2; export PWD=******** /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/dbca -createDatabase -silent -gdbName MYNEWDB.das.ch -sid MYNEWDB -sysPassword ******* -systemPassword ******* -dbsnmpPassword ******* -asmSysPassword ******* -storageType ASM -datafileJarLocation /u01/app/oracle/product/12.1.0.2/dbhome_2/assistants/dbca/templates -emConfiguration DBEXPRESS -databaseConfType SINGLE -createAsContainerDatabase false -characterSet WE8MSWIN1252 -nationalCharacterSet AL16UTF16 -databaseType MULTIPURPOSE -responseFile NO_VALUE -templateName seed_noncdb_se2016-10-13_15-33-59.0709.dbc -initParams "db_recovery_file_dest_size=174080,db_unique_name=MYNEWDB" -recoveryAreaDestination /u03/app/oracle/fast_recovery_area/]'

Do I like it? Actually I don't, for two reasons. The first reason is that I don't want to learn a new syntax every year. I've known CREATE DATABASE for decades and DBCA for years; I just prefer to use those.
The second reason is that if you want to add a layer on top of something, you need to provide at least the same functionality and the same quality as the tool you call behind the scenes. If you provide a command to create a database, then you must provide a command to delete it, even if the previous creation failed. I created a database whose creation failed. The reason was that I had changed the listener port, but the template explicitly sets local_listener to port 1521. Fortunately it calls DBCA, and I know where the logs are. So my ODA repository has a database in failed status. The problem is that you can't drop it (it doesn't exist for DBCA) and you cannot re-create it (it exists for ODA). I'm not a developer, but when I write code I try to manage exceptions. At the very least they should implement a 'force' mode where errors are ignored when deleting something that does not exist.
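As an aside, here is a minimal sketch, under stated assumptions, of how that conflicting parameter could be checked and corrected before retrying the creation (the host name and port 1526 below are made-up examples, not taken from the post):

-- hypothetical fix for the local_listener conflict described above
show parameter local_listener
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=oda01)(PORT=1526))' scope=both;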

So if you have the same problem, here is what I did:

  • Open an SR, in the hope that they understand there's something to fix in their code without asking me to upload all the log files
  • Create a database with the same name directly with DBCA, then drop it with ODACLI

Finally, my workaround worked, while Oracle Support came back with two solutions: create the database with another name, or re-image the ODA!

But when it doesn't fail, the creation is very fast: it works from templates with datafiles, and the datafiles sit on those very fast NVMe SSDs.

Create the standby

I don't like this additional layer, but I have the feeling that it's better if the ODA repository knows about my databases. The standby database is created with the Dbvisit interface (I'm talking about a real user-friendly interface here, where errors are handled and you even have the possibility to resume a creation that failed). How do I make it go into the ODA repository?

I see 3 possibilities.

odacli has a "--register-database" option to register an already created database. But that probably does too much, because it was designed to register databases created on previous ODAs with oakcli.

odacli also has an "--instanceonly" option, which is there to register a standby database that will be created later, with an RMAN duplicate for example. Again, this does too much, as it creates an instance. I tried it and didn't have the patience to make it work. When ODACLI encounters a problem, it doesn't explain what's wrong, but just shows the command-line help.

Finally, what I did was create a database with ODACLI and then drop it (outside of ODACLI). This is ugly, but it's the only way I got something where I understand exactly what is done. This is where I encountered the issue above, so my workflow was actually: create from ODACLI -> fails -> drop from DBCA -> re-create from ODACLI -> success -> drop

I didn’t drop it from DBCA because I wanted to keep the entry in ORATAB. I did it from RMAN:

RMAN> startup force dba mount
RMAN> drop database including backups noprompt;

Then there was no problem creating the standby from the Dbvisit GUI.

Create a filesystem

I've created the databases directly in ASM. I don't see any reason to create an ACFS volume for them, especially for Standard Edition where you cannot use ACFS snapshots. It's just a performance overhead (and with those no-latency disks, any CPU overhead counts) and a risk of removing a datafile, as the files would be exposed in a filesystem for no reason.

However, Dbvisit needs a folder in which to store the archived logs that are shipped to the standby. I could create a folder in the local filesystem, but I preferred to create an ACFS filesystem for it.
I did it from ODACLI:


odacli create-dbstorage --dataSize 200 -n DBVISIT -r ACFS

This creates a 200GB filesystem mounted as /u02/app/oracle/oradata/DBVISIT/

Who starts the database?

Dbvisit comes with a scheduler that can start the databases in the required mode. But in ODA the resources are managed by Grid Infrastructure. So after creating the standby database you must modify its mount mode:

srvctl modify database -d MYNEWDB -startoption mount

Don’t forget to change the mount modes after a switchover or failover.

This can be scripted with something like: srvctl modify database -db $db -startoption $(/opt/dbvisit/standby/dbv_oraStartStop status $db| awk '/^Regular Database/{print "OPEN"}/^Standby Database/{print "MOUNT"}')

Keep it simple and test it

ODA is simple if you do what it has been designed for: run the database versions that are certified (currently 11.2.0.4 and 12.1.0.2) and don't try to customize the configuration. Always test the switchover, so that you can rely on the protection. It's easy with Dbvisit standby, either from the GUI or the command line. And be sure that your network can keep up with the redo rate. Again, this is easy to check from the GUI. Here is an example from testing the migration with a Data Pump import:
[Image: Dbvisit transfer log size]

From public prices, and before any discount, you can get two ODA X6-2S plus perpetual licences for Oracle Database Standard Edition and Dbvisit standby for less than 90KUSD.
If you need more storage, you can double the capacity for about an additional 10KUSD per ODA.
And if you think that ODA may need a DBA sometimes, have a look at our SLAs and you have a reliable and affordable system on your premises to store and process your data.

 

This article, 2 x ODA X6-2S + Dbvisit Standby: Easy DR in SE, appeared first on Blog dbi services.

Deadlock on two delete statements

Tom Kyte - Fri, 2016-10-21 10:26
Hi Tom, I'm not sure I understand the root cause of the following deadlock trace. Assuming I'm reading it correctly the trace is showing two different sql sessions attempting to delete the same row in the AAA_WF_OPERAND table. However, I do not se...
Categories: DBA Blogs

REGEXP_REPLACE help

Tom Kyte - Fri, 2016-10-21 10:26
Hi! This is (should be...) a trivial question for those who are familiar with Regular Expressions, I guess (and hope). I used them almost 25 years ago, and I remember I was comfortable with them at the time. Weird enough, no matter how hard I am stru...
Categories: DBA Blogs

DDL of composite partition table without subpartition in each partition

Tom Kyte - Fri, 2016-10-21 10:26
Hi Tom, I have one composite partitioned table. When we generate ddl of the table, subpartition details are present in each partition. I know that we can have different high values for subpartition in different partition so subpartition details in...
Categories: DBA Blogs

alert logfile

Tom Kyte - Fri, 2016-10-21 10:26
Hi , I need your help to find alert log text data between a range of dates eg: 10 june 2016 to 11 june 2016 in linux RHEL 6 ,as my alert log file is very big i can't do that manually. Your help will be appreciated ,thanks in advance. regards.
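A minimal sketch of one possible approach, assuming a database version where the fixed view V$DIAG_ALERT_EXT is available (it exposes the XML alert log under the ADR), so the date range can be expressed in SQL instead of editing the huge text file by hand:

SELECT originating_timestamp, message_text
FROM   v$diag_alert_ext
WHERE  originating_timestamp BETWEEN TIMESTAMP '2016-06-10 00:00:00'
                                 AND TIMESTAMP '2016-06-11 23:59:59'
ORDER  BY originating_timestamp;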
Categories: DBA Blogs

before insert or update on a column ROW trigger

Tom Kyte - Fri, 2016-10-21 10:26
currently i am using REGEXP_SUBSTR function to encrypt cc number in a text column of varchar2. is there a function for oracle 9i that i can use to encrypt the number in any combination to XXX. thank you
Categories: DBA Blogs

Difference

Tom Kyte - Fri, 2016-10-21 10:26
1) What is difference between conventional path load and direct path load? 2) when to use conventional path load? 3) when to use direct path load?
Categories: DBA Blogs

dbms_parallel_execute and 2 packages of mine !

Tom Kyte - Fri, 2016-10-21 10:26
Hi I've 2 PL/SQL packages. - First is dedicated to : * create a parallel task, * then to create chunks (by rowid), * and finally to execute (previous) created task by executing a second PL/SQL Package....
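For context, a minimal sketch of the dbms_parallel_execute flow the question refers to (the task name, table and DML statement are made up for illustration): create a task, chunk it by rowid, then run a statement that carries the :start_id/:end_id binds:

DECLARE
  l_task VARCHAR2(30) := 'MY_ROWID_TASK';
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => l_task);
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => l_task,
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => TRUE,
    chunk_size  => 10000);
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => l_task,
    sql_stmt       => 'update big_table set processed = ''Y''
                       where rowid between :start_id and :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);
  DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
END;
/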
Categories: DBA Blogs

How to check if oracle directory points to the right location?

Tom Kyte - Fri, 2016-10-21 10:26
Hello! Is there a way to check if my directories look at the right folders on my network? ALL_DIRECTORIES view says the directory path is something like /d01/data/xfer/BLA/BLABLA/IN. The actual network location is something like \\xyz14311.bla.c...
Categories: DBA Blogs

Configure easily your Stretch Database

Yann Neuhaus - Fri, 2016-10-21 10:07

In this blog, I will present the new Stretch Database feature in SQL Server 2016. It couples your on-premises SQL Server database with an Azure SQL Database, allowing you to stretch data from one or more tables to the Azure cloud.
This mechanism lets you use the low-cost storage available in Azure instead of fast and expensive local solid-state drives. During data transfers and remote queries, it is the Azure SQL Database resources that are used, not those of your on-premises SQL Server.

First, you need to enable the “Remote Data Archive” option at the instance level. To verify if the option is enabled:
USE master
GO
SELECT name, value, value_in_use, description from sys.configurations where name like 'remote data archive'

To enable this option at the instance level:

EXEC sys.sp_configure N'remote data archive', '1';
RECONFIGURE;
GO

Now, you have to link your on-premises database with a remote SQL Database server:
Use AdventureWorks2014;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'masterPa$$w0rd'
GO
CREATE DATABASE SCOPED CREDENTIAL Stretch_cred
WITH IDENTITY = 'dbi' , SECRET = 'userPa$$w0rd' ;
GO
ALTER DATABASE AdventureWorks2014
SET REMOTE_DATA_ARCHIVE = ON
(
SERVER = 'dbisqldatabase.database.windows.net' ,
CREDENTIAL = Stretch_cred
) ;
GO

The process may take some time, as it creates a new SQL Database in Azure linked to your on-premises database. The credential entered here is used to connect to your SQL Database server and must match a login defined on that server. Beforehand, you need to protect the credential with a database master key.

To view all the remote databases from your instance:
Select * from sys.remote_data_archive_databases

Now, if you want to migrate one table from your database ([Purchasing].[PurchaseOrderDetail] in my example), proceed as follows:
ALTER TABLE [Purchasing].[PurchaseOrderDetail] SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = OUTBOUND) ) ;

Of course, repeat this process for each table you want to stretch. You can still access your data during the migration process.

To view all the remote tables from your instance:
Select * from sys.remote_data_archive_tables

To view the batches of data being migrated (you can also filter on a specific table, as sketched below):
Select * from sys.dm_db_rda_migration_status
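For example, a minimal sketch restricted to a single stretched table (this assumes the view's table_id column and reuses the example table from above):

Select * from sys.dm_db_rda_migration_status
where table_id = OBJECT_ID(N'Purchasing.PurchaseOrderDetail')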

It is also possible to easily migrate your data back:
ALTER TABLE [Purchasing].[PurchaseOrderDetail] SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = INBOUND ) ) ;

Moreover, you can select which rows to migrate by using a filter function. Here is an example:
CREATE FUNCTION dbo.fn_stretchpredicate(@column9 datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
WHERE @column9 > CONVERT(datetime, '1/1/2014', 101)
GO

Then, when enabling the data migration, specify the filter function:
ALTER TABLE [Purchasing].[PurchaseOrderDetail] SET ( REMOTE_DATA_ARCHIVE = ON (
FILTER_PREDICATE = dbo.fn_stretchpredicate(ModifiedDate),
MIGRATION_STATE = OUTBOUND
) )

Of course, in the Microsoft world, you can also use a wizard to set up this feature. The choice is up to you!

 

This article, Configure easily your Stretch Database, appeared first on Blog dbi services.

Rapid analytics

DBMS2 - Fri, 2016-10-21 09:17

“Real-time” technology excites people, and has for decades. Yet the actual, useful technology to meet “real-time” requirements remains immature, especially in cases which call for rapid human decision-making. Here are some notes on that conundrum.

1. I recently posted that “real-time” is getting real. But there are multiple technology challenges involved, including:

  • General streaming. Some of my posts on that subject are linked at the bottom of my August post on Flink.
  • Low-latency ingest of data into structures from which it can be immediately analyzed. That helps drive the (re)integration of operational data stores, analytic data stores, and other analytic support — e.g. via Spark.
  • Business intelligence that can be used quickly enough. This is a major ongoing challenge. My clients at Zoomdata may be thinking about this area more clearly than most, but even they are still in the early stages of providing what users need.
  • Advanced analytics that can be done quickly enough. Answers there may come through developments in anomaly management, but that area is still in its super-early days.
  • Alerting, which has been under-addressed for decades. Perhaps the anomaly management vendors will finally solve it.

2. In early 2011, I coined the phrase investigative analytics, about which I said three main things:

  • It is meant to contrast with “operational analytics”.
  • It is meant to conflate “several disciplines, namely”:
    • Statistics, data mining, machine learning, and/or predictive analytics.
    • The more research-oriented aspects of business intelligence tools.
    • Analogous technologies as applied to non-tabular data types such as text or graph.
  • A simple definition would be “Seeking (previously unknown) patterns in data.”

Generally, that has held up pretty well, although “exploratory” is the more widely used term. But the investigative/operational dichotomy obscures one key fact, which is the central point of this post: There’s a widespread need for very rapid data investigation.

3. This is not just a niche need. There are numerous rapid-investigation use cases to keep in mind, some already mentioned in my recent posts on anomaly management and real-time applications.

  • Network operations. This is my paradigmatic example.
    • Data is zooming all over the place, in many formats and structures, among many kinds of devices. That’s log data, header data and payload data alike. Many kinds of problems can arise …
    • … which operators want to diagnose and correct, in as few minutes as possible.
    • Interfaces commonly include real-time business intelligence, some drilldown, and a lot of command-line options.
    • I’ve written about various specifics, especially in connection with the vendors Splunk and Rocana.
  • Security and anti-fraud. Infosec and cyberfraud, to a considerable extent, are just common problems in network operations. Much of the response is necessarily automated — but the bad guys are always trying to outwit your automation. If you think they may have succeeded, you want to figure that out very, very fast.
  • Consumer promotion and engagement. Consumer marketers feel a great need for speed. Some of it is even genuine. :)
    • If an online promotion is going badly (or particularly well), they can in theory react almost instantly. So they’d like to know almost instantly, perhaps via BI tools with great drilldown.
    • The same is even truer in the case of social media eruptions and the like. Obviously, the tools here are heavily text-oriented.
    • Call centers and even physical stores have some of the same aspects as internet consumer operations.
  • Consumer internet backends, for e-commerce, publishing, gaming or whatever. These cases combine and in some cases integrate the previous three points. For example, if you get a really absurd-looking business result, that could be your first indication of network malfunctions or automated fraud.
  • Industrial technology, such as factory operations, power/gas/water networks, vehicle fleets or oil rigs. Much as in IT networks, these contain a diversity of equipment — each now spewing its own logs — and have multiple possible modes of failure. More often than is the case in IT networks, you can recognize danger signs, then head off failure altogether via preventive maintenance. But when you can’t, it is crucial to identify the causes of failure fast.
  • General IoT (Internet of Things) operation. This covers several of the examples above, as well as cases in which you sell a lot of devices, have them “phone home”, and labor to keep that whole multi-owner network working.
  • National security. If I told you what I meant by this one, I’d have to … [redacted].

4. And then there’s the investment industry, which obviously needs very rapid analysis. When I was a stock analyst, I could be awakened by a phone call and told news that I would need to explain to 1000s of conference call listeners 20 minutes later. This was >30 years ago. The business moves yet faster today.

The investment industry has invested greatly in high-speed supporting technology for decades. That's how Mike Bloomberg got so rich founding a vertical market tech business. But investment-oriented technology indeed remains a very vertical sector; little of it gets more broadly applied.

I think the reason may be that investing is about guesswork, while other use cases call for more definitive answers. In particular:

  • If you’re wrong 49.9% of the time in investing, you might still be a big winner.
  • In high-frequency trading, speed is paramount; you have to be faster than your competitors. In speed/accuracy trade-offs, speed wins.

5. Of course, it’s possible to overstate these requirements. As in all real-time discussions, one needs to think hard about:

  • How much speed is important in meeting users’ needs.
  • How much additional speed, if any, is important in satisfying users’ desires.

But overall, I have little doubt that rapid analytics is a legitimate area for technology advancement and growth.

Categories: Other
