
Feed aggregator

OTN Tour of Latin America 2015 : The Journey Begins – CDG Airport

Tim Hall - Sat, 2015-08-01 11:59

I’ve been in Charles de Gaulle airport for about three hours now. Only another four to go… :)

I tried to record another technical video, but you can hear kids in the background. Now the timings are sorted, it should be pretty quick to re-record when I get to a hotel, so that’s good I guess. I’m not sure I can face doing another one today.

My YouTube channel is on 199 subscribers. About to ding to the magic 200. :)

Perhaps I should get the GoPro out and do some filming of the barren wasteland, which is the K gates in Terminal 2E.

Cheers

Tim…


Combining Spark Streaming and Data Frames for Near-Real Time Log Analysis & Enrichment

Rittman Mead Consulting - Sat, 2015-08-01 08:51

A few months ago I posted an article on the blog around using Apache Spark to analyse activity on our website, using Spark to join the site activity to some reference tables for some one-off analysis. In this article I’ll be taking an initial look at Spark Streaming, a component within the overall Spark platform that allows you to ingest and process data in near real-time whilst keeping the same overall code base as your batch-style Spark programs.


Like regular batch-based Spark programs, Spark Streaming builds on the concept of RDDs (Resilient Distributed Datasets) and provides an additional high-level abstraction called a “discretized stream” or DStream, representing a continuous stream of RDDs over a defined time period. In the example I’m going to create, I’ll use Spark Streaming’s DStream feature to hold the last 24 hours’ worth of website activity in memory, and use it to update a “Top Ten Pages” Impala table that’ll get updated once a minute.


To create the example I started with the Log Analyzer example in the set of DataBricks Spark Reference Applications, and adapted the Spark Streaming / Spark SQL example to work with our CombinedLogFormat log format that contains two additional log elements. In addition, I’ll also join the incoming data stream with some reference data sitting in an Oracle database and then output a parquet-format file to the HDFS filesystem containing the top ten pages over that period.

The bits of the Log Analyzer reference application that we reused comprise two scripts that compile into a single JAR file: one that creates a Scala object to parse the incoming CombinedLogFormat log files, and another containing the main program. The log parsing object contains a single function that takes a set of log lines, then returns a Scala class that breaks the log entries down into the individual elements (IP address, endpoint (URL), referrer and so on). Compared to the DataBricks reference application I had to add two extra log file elements to the ApacheAccessLog class (referer and agent), and add some code to deal with “-“ values that could be in the log for the content size; I also added some extra code to ensure the URLs (endpoints) quoted in the log matched the format used in the data extracted from our WordPress install, which stores all URLs with a trailing forward-slash (“/“).

package com.databricks.apps.logs

case class ApacheAccessLog(ipAddress: String, clientIdentd: String,
                           userId: String, dateTime: String, method: String,
                           endpoint: String, protocol: String,
                           responseCode: Int, contentSize: Long,
                           referer: String, agent: String) {
}

object ApacheAccessLog {
  val PATTERN = """^(\S+) (\S+) (\S+) \[([\w\d:\/]+\s[+\-]\d{4})\] "(\S+) (\S+) (\S+)" (\d{3}) ([\d\-]+) "([^"]+)" "([^"]+)"""".r

  def parseLogLine(log: String): ApacheAccessLog = {
    val res = PATTERN.findFirstMatchIn(log)
    if (res.isEmpty) {
      ApacheAccessLog("", "", "", "", "", "", "", 0, 0, "", "")
    }
    else {
      val m = res.get
      val contentSizeSafe: Long = if (m.group(9) == "-") 0 else m.group(9).toLong
      val formattedEndpoint: String = if (m.group(6).charAt(m.group(6).length - 1).toString == "/") m.group(6) else m.group(6).concat("/")

      ApacheAccessLog(m.group(1), m.group(2), m.group(3), m.group(4),
        m.group(5), formattedEndpoint, m.group(7), m.group(8).toInt, contentSizeSafe, m.group(10), m.group(11))
    }
  }
}

The body of the main application script starts by importing Scala classes for Spark, Spark SQL and Spark Streaming, and then defines two variables that determine the amount of log data the application will consider: WINDOW_LENGTH (86400 seconds, or 24 hours), which determines the window of log activity that the application will consider, and SLIDE_INTERVAL, set to 60 seconds or one minute, which determines how often the statistics are recalculated. Using these values means that our Spark Streaming application will recompute, every minute, the top ten most popular pages over the last 24 hours.

package com.databricks.apps.logs.chapter1

import com.databricks.apps.logs.ApacheAccessLog
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SaveMode
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.streaming.{StreamingContext, Duration}

object LogAnalyzerStreamingSQL {
  val WINDOW_LENGTH = new Duration(86400 * 1000)
  val SLIDE_INTERVAL = new Duration(60 * 1000)

In our Spark Streaming application, we’re also going to load reference data from our WordPress site, exported and stored in an Oracle database, to add post title and post author values to the raw log entries that come in via Spark Streaming. In the next part of the script we define a new Spark context, then a Spark SQL context based on it, and then create a Spark SQL data frame to hold the Oracle-sourced WordPress data that we’ll later join to the incoming DStream data. Using Spark’s new Data Frame feature and the Oracle JDBC drivers that I downloaded separately from the Oracle website, I can pull in reference data from Oracle or other database sources, or bring it in from a CSV file as I did in the previous Spark example, to supplement my raw incoming log data.

  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("Log Analyzer Streaming in Scala")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val postsDF = sqlContext.load("jdbc", Map(
      "url" -> "jdbc:oracle:thin:blog_refdata/password@bigdatalite.rittmandev.com:1521:orcl",
      "dbtable" -> "BLOG_REFDATA.POST_DETAILS"))

    postsDF.registerTempTable("posts")

Note also how Spark SQL lets me declare a data frame (or indeed any RDD with an associated schema) as a Spark SQL table, so that I can later run SQL queries against it – I’ll come back to this at the end.

Now comes the first part of the Spark Streaming code. I start by defining a new Spark Streaming context off the same base Spark context that I created the Spark SQL one from, then I use that Spark Streaming context to create a DStream that reads newly-arrived files landed in an HDFS directory – for this example I’ll manually copy the log files into an “incoming” HDFS directory, whereas in real life I’d connect Spark Streaming to Flume using FlumeUtils for a more direct connection to activity on the webserver.

val streamingContext = new StreamingContext(sc, SLIDE_INTERVAL)
val logLinesDStream = streamingContext.textFileStream("/user/oracle/rm_logs_incoming")
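As an aside, that Flume route would only change how the raw lines arrive. A minimal sketch of the alternative – the host name and port are made up, and it assumes the spark-streaming-flume artifact is on the classpath – might look like this, with logLinesFromFlume then taking the place of logLinesDStream:

import org.apache.spark.streaming.flume.FlumeUtils

// Hypothetical host/port where a Flume agent with an Avro sink is listening
val flumeStream = FlumeUtils.createStream(streamingContext, "webserver01", 4141)

// Pull the raw log line out of each Flume event body
val logLinesFromFlume = flumeStream.map(event => new String(event.event.getBody.array()))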

Then I call the Scala “map” transformation to convert the incoming DStream into an ApacheAccessLog-formatted DStream, and cache this new DStream in-memory. Next, and as the final part of this stage, I call the Spark Streaming “window” function, which packages the input data into – in this case – a 24-hour window of data, and creates a new Spark RDD every SLIDE_INTERVAL – in this case one minute – of time.

val accessLogsDStream = logLinesDStream.map(ApacheAccessLog.parseLogLine).cache()
val windowDStream = accessLogsDStream.window(WINDOW_LENGTH, SLIDE_INTERVAL)

Now that Spark Streaming is creating RDDs for me to represent all the log activity over my 24 hour period, I can use the .foreachRDD control structure to turn that RDD into its own data frame (using the schema I’ve inherited from the ApacheAccessLog Scala class earlier on), and filter out bot activity and references to internal WordPress pages so that I’m left with actual page accesses to then calculate the top ten list from.

windowDStream.foreachRDD(accessLogs => {
  if (accessLogs.count() == 0) {
    println("No logs received in this time interval")
  } else {
    accessLogs.toDF().registerTempTable("accessLogs")

    // Filter out bot traffic and internal WordPress URLs, then register the result for SQL queries
    accessLogs
      .filter(r => !r.agent.matches(".*(spider|robot|bot|slurp|monitis|Baiduspider|AhrefsBot|EasouSpider|HTTrack|Uptime|FeedFetcher|dummy).*"))
      .filter(r => !r.endpoint.matches(".*(wp-content|wp-admin|wp-includes|favicon.ico|xmlrpc.php|wp-comments-post.php).*"))
      .toDF()
      .registerTempTable("accessLogsFiltered")

Then I use Spark SQL to join the table created against the windowed log data with the Oracle reference data I brought in earlier, and create a parquet-formatted file containing the top-ten most popular pages over the past 24 hours. Parquet is the default storage format used by Spark SQL and is best suited to BI-style columnar queries, but I could use Avro, CSV or another file format if I brought the correct library imports in.

    val topTenPostsLast24Hour = sqlContext.sql("SELECT p.POST_TITLE, p.POST_AUTHOR, COUNT(*) as total FROM accessLogsFiltered a JOIN posts p ON a.endpoint = p.POST_SLUG GROUP BY p.POST_TITLE, p.POST_AUTHOR ORDER BY total DESC LIMIT 10")

    // Persist the top ten table for this window to HDFS as a parquet file
    topTenPostsLast24Hour.save("/user/oracle/rm_logs_batch_output/topTenPostsLast24Hour.parquet", "parquet", SaveMode.Overwrite)
  }
})

Finally, the last piece of the code starts off the data ingestion process, which then continues until it is interrupted or stopped.

    streamingContext.start()
    streamingContext.awaitTermination()
  }
}

I can now go over to Hue and move some log files into the HDFS directory that the Spark application is monitoring, like this:

[Screenshot: uploading log files into the HDFS “incoming” directory via Hue]

Then, based on the SLIDE_INTERVAL I defined in the main Spark application earlier on (60 seconds, in my case), the Spark Streaming application picks up the new files and processes them, outputting the results as a Parquet file back on the HDFS filesystem (these two screenshots should display as animated GIFs).

[Animated GIF: Spark Streaming picking up and processing the new log files]

So what to do with the top-ten pages parquet file that the Spark Streaming application creates? The most obvious thing to do would be to create an Impala table over it, using the schema metadata embedded into the parquet file, like this:

CREATE EXTERNAL TABLE rm_logs_24hr_top_ten
LIKE PARQUET '/user/oracle/rm_logs_batch_output/topTenPostsLast24Hour.parquet/part-r-00001.parquet'
STORED AS PARQUET
LOCATION '/user/oracle/rm_logs_batch_output/topTenPostsLast24Hour.parquet';

Then I can query the table using Hue again, or I can import the Impala table metadata into OBIEE and analyse it using Answers and Dashboards.
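For example, a simple query against that table – a sketch using the column names produced by the Spark SQL statement earlier, rather than output from my environment – would look something like this:

SELECT post_title, post_author, total
FROM   rm_logs_24hr_top_ten
ORDER  BY total DESC;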


So that’s a very basic example of Spark Streaming, and I’ll be building on this example over the next few weeks to add features such as persistent storage of all processed data, and classification and clustering of the data using Spark MLlib. More importantly, copying files into HDFS for ingestion into Spark Streaming adds quite a lot of latency, and it’d be better to connect Spark directly to the webserver using Flume or, even better, Kafka – I’ll add examples showing these features in the next few posts in this series.

Categories: BI & Warehousing

RMAN -- 6 : RETENTION POLICY and CONTROL_FILE_RECORD_KEEP_TIME

Hemant K Chitale - Sat, 2015-08-01 08:12
Most people read the documentation on CONTROL_FILE_RECORD_KEEP_TIME and believe that this parameter *guarantees* that Oracle will retain backup records for that long.  (Some do understand that backup records may be retained longer, depending on the availability of slots (or "records") for the various types of metadata in the controlfile).

However, .... as you should know from real-world experience ... there is always a "BUT".

Please read Oracle Support Note "How to ensure that backup metadata is retained in the controlfile when setting a retention policy and an RMAN catalog is NOT used. (Doc ID 461125.1)" and Bug 6458068.

Oracle may need to "grow" the controlfile when adding information about ArchiveLogs or BackupSets / BackupPieces.
An example is this set of entries that occurred when I had created very many archivelogs and backuppieces for them :
Trying to expand controlfile section 13 for Oracle Managed Files
Expanded controlfile section 13 from 200 to 400 records
Requested to grow by 200 records; added 9 blocks of records


To understand the contents of the controlfile, see how this listing shows that I have space for 400 Backup Piece records and am currently using 232 of them:

SQL> select * from v$controlfile_record_section where type like '%BACKUP%' order by 1;

TYPE                         RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID
---------------------------- ----------- ------------- ------------ ----------- ---------- ----------
BACKUP CORRUPTION                     44           371            0           0          0          0
BACKUP DATAFILE                      200           245          159           1        159        159
BACKUP PIECE                         736           400          232         230         61        261
BACKUP REDOLOG                        76           215          173           1        173        173
BACKUP SET                            40           409          249           1        249        249
BACKUP SPFILE                        124           131           36           1         36         36

6 rows selected.

SQL>


However, if I keep creating new Backup Pieces without deleting older ones (and without Oracle auto-deleting older ones) and Oracle hits the allocation of 400 records, it may try to add new records, printing a message (as shown above) into the alert.log. Oracle may overwrite records older than control_file_record_keep_time. If necessary, it tries to expand the controlfile. If, however, there is not enough filesystem space (or space in the raw device or ASM DiskGroup) to expand the controlfile, it may have to overwrite some records in the controlfile. If it has to overwrite records that are older than control_file_record_keep_time, it provides no warning. However, if it has to overwrite records that are not older than control_file_record_keep_time, it *does* write a warning to the alert.log.

I don't want to violate the Oracle Support policy by quoting from the Note and the Bug, but I urge you to read both very carefully.  The Note has a particular line about whether there is a relationship between the setting of control_file_record_keep_time and the Retention Policy.  In the Bug, there is one particular line about whether the algorithm to extend / reuse / purge records in the controlfile is or is not related to the Retention Policy.  So it IS important to ensure that you have enough space for the controlfile to grow in case it needs to expand space for these records.
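As a quick sanity check, you can compare the two settings like this (the 14-day recovery window is just an illustrative value; the usual advice in the Note is to keep CONTROL_FILE_RECORD_KEEP_TIME somewhat higher than the recovery window):

SQL> show parameter control_file_record_keep_time

RMAN> SHOW RETENTION POLICY;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;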

Also, remember that not all Retention Policies are defined in terms of days.  Some may be defined in terms of REDUNDANCY (the *number* of Full / L0 backups that are not to be obsoleted).  This does NOT relate to a number of days, because Oracle can't predict how many backups you run in a day / week / month.  Take an organisation with a small database that runs 3 Full / L0 backups per day versus another with a very large database that runs a Full / L0 backup only once a fortnight!  How many days of Full / L0 backups would each have to retain if the REDUNDANCY is set to, say, 3?
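For a REDUNDANCY-based policy the configuration looks like this instead (3 is just the example value from the question above), and REPORT OBSOLETE / DELETE OBSOLETE are what actually identify and remove the backups that fall outside the policy:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;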

.
.
.




Categories: DBA Blogs

OTN Tour of Latin America 2015 : The Journey Begins

Tim Hall - Sat, 2015-08-01 06:45

I’m about to board a flight to Paris, where I will wait for 7 hours before starting my 14 hour flight to Montevideo, Uruguay. I think you can probably guess how I’m feeling at this moment…

Why won’t someone hurry up and invent a teleport device?

I will probably put out little posts like this along the way, just so friends and family know what is going on. It’s wrong to wish your life away, but I’m really not looking forward to the next 20+ hours…

Hopefully I will get power in Paris, so I can do some stuff on my laptop…

Cheers

Tim…


FASTSYNC Redo Transport for Data Guard in #Oracle 12c

The Oracle Instructor - Sat, 2015-08-01 04:11

FASTSYNC is a new LogXptMode for Data Guard in 12c. It enables Maximum Availability protection mode at larger distances with less performance impact than LogXptMode SYNC has had before. The old SYNC behavior looks like this:

[Diagram: LogXptMode=SYNC]

The point is that we need to wait for two acknowledgements by RFS (got it & wrote it) before we can write the redo entry locally and get the transaction committed. This may slow down the speed of transactions on the Primary, especially with long distances. Now to the new feature:

[Diagram: LogXptMode=FASTSYNC]

Here, we wait only for the first acknowledgement (got it) by RFS before we can write locally. There is still a possible performance impact with large distances here, but it is less than before. This is how it looks implemented:

DGMGRL> show configuration;   

Configuration - myconf

  Protection Mode: MaxAvailability
  Members:
  prima - Primary database
    physt - (*) Physical standby database 

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> show database physt logxptmode
  LogXptMode = 'fastsync'
DGMGRL> exit
[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Aug 1 10:41:27 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      service="physt", SYNC NOAFFIRM
                                                 delay=0 optional compression=disable
                                                 max_failure=0 max_connections=1
                                                 reopen=300 db_unique_name="physt"
                                                 net_timeout=30,
                                                 valid_for=(online_logfile,all_roles)
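For reference, switching the standby to this mode is a single Broker command, using the same EDIT DATABASE syntax that appears below for switching back to SYNC:

DGMGRL> edit database physt set property logxptmode=fastsync;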

My configuration uses Fast-Start Failover, just to show that this is no restriction. FASTSYNC can also be used together with Far Sync Instances, although that is not required. You can’t have Maximum Protection with FASTSYNC, though:

DGMGRL> disable fast_start failover;
Disabled.
DGMGRL> edit configuration set protection mode as maxprotection;
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode

Failed.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.

Addendum: As my dear colleague Joel Goodman pointed out, the name of the process that does the Redo Transport from Primary to Standby has changed from LNS to NSS (for synchronous Redo Transport):

SQL> select name,description from v$bgprocess where paddr<>'00';

NAME  DESCRIPTION
----- ----------------------------------------------------------------
PMON  process cleanup
VKTM  Virtual Keeper of TiMe process
GEN0  generic0
DIAG  diagnosibility process
DBRM  DataBase Resource Manager
VKRM  Virtual sKeduler for Resource Manager
PSP0  process spawner 0
DIA0  diagnosibility process 0
MMAN  Memory Manager
DBW0  db writer process 0
MRP0  Managed Standby Recovery
TMON  Transport Monitor
ARC0  Archival Process 0
ARC1  Archival Process 1
ARC2  Archival Process 2
ARC3  Archival Process 3
ARC4  Archival Process 4
NSS2  Redo transport NSS2
LGWR  Redo etc.
CKPT  checkpoint
RVWR  Recovery Writer
SMON  System Monitor Process
SMCO  Space Manager Process
RECO  distributed recovery
LREG  Listener Registration
CJQ0  Job Queue Coordinator
PXMN  PX Monitor
AQPC  AQ Process Coord
DMON  DG Broker Monitor Process
RSM0  Data Guard Broker Resource Guard Process 0
NSV1  Data Guard Broker NetSlave Process 1
INSV  Data Guard Broker INstance SlaVe Process
FSFP  Data Guard Broker FSFO Pinger
MMON  Manageability Monitor Process
MMNL  Manageability Monitor Process 2

35 rows selected.

I’m not quite sure, but I think that was already the case in 11gR2. I just kept the old name in my sketches out of habit :-)


Tagged: 12c New Features, Data Guard
Categories: DBA Blogs

Internet of Things (IoT) - What is your plan?

Peeyush Tugnawat - Fri, 2015-07-31 21:42

The proliferation of connected devices and ever-growing data volumes are driving some very interesting Internet of Things (IoT) use cases and challenges.

Check out some very interesting facts. What is your plan?

Click on the interactive image below...

What is your Plan? 

The Internet of Things - Managing the Complexity in Cloud

Peeyush Tugnawat - Fri, 2015-07-31 21:38

By 2020 there will be 50 billion connected devices in the world, generating more data than ever thought possible.

Watch this video to understand how IoT is changing the world and your business models. What is your plan?

Check out Oracle's IoT offering in Cloud - IoT Cloud

Blackboard’s Messaging Problems

Michael Feldstein - Fri, 2015-07-31 15:07

By Michael Feldstein

There are a lot of things that are hard to evaluate from the outside when gauging how a company is doing under new management in the midst of a turnaround with big new products coming out. For example, how good is Ultra, Blackboard’s new user experience? (At least, I think the user experience is what they mean by “Ultra.” Most of the time.) We can look at it from the outside and play around with it for a bit, but the best way to judge it is to talk to a lot of folks who have spent time living with it and delivering courses in it. There aren’t that many of those at the moment. Blackboard has offered to put us in touch with some of them, and we will let you know what we learn from them after we talk to them. How likely is Blackboard to deliver the promised functionality on their Ultra to-do list to other customers on schedule (or at all)? Since this is a big initiative and the company doesn’t have much of a track record, it’s hard to tell in advance of them actually releasing software. We’ll watch and report on it as it comes out. How committed is Blackboard to self-hosted customers on the current platform? We have their word, and logical reasons why we believe they mean it when they say they want to support those customers, but we have to talk to a bunch of customers to find out what they think of the support that they are getting, and even then, we only know about Blackboard’s current execution, which is not the same as their future commitment. So there are a lot of critical aspects about the company that are just hard and time-consuming to evaluate and will have to wait on more data.

But not everything is hard to evaluate. Communication, for example, is pretty easy to judge. Last year I mocked Jay Bhatt pretty soundly for his keynote. (Of course, we have hit D2L a lot harder for their communication issues because theirs have been a lot worse.) In some ways, it is so easy to critique communication that we have to be careful not to just take cheap shots. Everybody loves to mock vendors in general and LMS vendors in particular. We’re mainly interested in communications problems that genuinely threaten to hurt their relationship with their customers. Blackboard does have serious customer communication problems at the moment, and they do matter. I’m going to hit on a few of them.

Keynote Hits Sour Notes

Since I critiqued last year’s keynote, an update in that department is as good a place to start as any. It’s sort of emblematic of the problem. This year’s keynote was better than last year’s but that doesn’t mean it was good. Of the two-hour presentation, only the last twenty minutes or so directly addressed the software. The rest was about values and process.

I get why the company is doing this. As I said in last year’s review, they are nothing if not earnest. So, for example, when Jay Bhatt says that we need to start a “revolution” in education and that Blackboard is inviting “you”—presumably the educators in the room—to join them, it doesn’t carry the sinister tone of the slick Sillycon Valley startup CEO talking about “disrupting” education (by which they generally mean replacing dumb, mean, unionized bad people teachers with slick, nice, happy-faced software). Jay comes across as a dad and a former teacher who honestly cares about education and wants very much to do his part to improve it. But his pitch is tone deaf. No matter how earnest you are, you can’t take center stage as the CEO of a software company that has a long and infamous reputation for disregarding customers and making education worse rather than better and then, giant-face projected on the jumbotron and simulcast on the web, convince people that you are just a dad who wants to make education better. It doesn’t work. It’s not going to win over skeptical customers, never mind skeptical prospective customers. No matter how much you sincerely mean it. No matter how much it is said with the best of intentions.

You also can’t spend the first 90+ minutes of the keynote talking about process and then get around to admitting that your revolutionary software is a year late. Phil and I both give Jay and Blackboard tons of credit for being forthright about the delay in the keynote, and for generally showing a kind of honesty and openness that we don’t see very often from big ed tech vendors. Really, it’s rare, it’s important, and it deserves more credit than it will probably be given by a lot of people. But in terms of having the intended effect on the audience, owning up to your delivery problems in the last 10 minutes of a two-hour keynote, most of which was also not spent talking about the stuff that customers most immediately care about, will not have the desired effect. The reason Blackboard went through that first 90 minutes is that they, really, really want to tell you, with all their hearts, that “Gee whiz, gang, we really do care and we really are trying super-hard to create something that will make students’ lives better.” But if the punchline, after 90+ minutes, is “…and…uh…we know we told you we’d have it done a year ago, but honestly, we mean it, we’re still working on it,” you will not win converts.

The one thing I did like very much, besides the honesty about missing their delivery dates, was the day-in-the-life walk-throughs of the software. They very compactly and effectively conveyed the quality of thought and concern for the student that the first 90 minutes of process talk did not. If you want to convince me that you really care about students, then don’t talk to me about how much you really care about the students. Show me what you have learned from them. Because talk is cheap. I won’t believe that you really care about students in a way that affects what you do in your business until you show me that you have developed a clear and actionable understanding of what students need and want and care about. That is what the walk-throughs accomplished (although they would have been even more effective with just a smidge less “golly gee” enthusiasm).

There’s one simple thing Blackboard could do that would vastly improve their keynotes and make a host of rhetorical sins more forgivable. They could bring back Ray Henderson’s annual report card. Every year, Ray would start the keynote by laying out last year’s publicly declared goals, providing evidence of progress (or not) toward those goals—quantitative evidence, whenever possible—and setting the goals for the new year. This set the right tone for the whole conference. “I made you some promises. Here’s how I did on those promises. Here’s what I’m going to do better this year. And here are some new promises.” As a customer, I will hear whatever else you have to say to me next much more charitably if you do that first. For example, Phil and I have heard a number of customers express dissatisfaction with the length of time it takes to fix bugs. At a time when Blackboard is trying to convince self-hosted customers that they will not be abandoned, this is particularly important not to let get out-of-hand because every customer who has an unaddressed bug will be tempted to read it as evidence that the company is secretly abandoning 9.1 and just lying about it. But if Blackboard leadership got up on stage—as they used to—and said, “Here’s the number of new bugs we had in the past year, here’s average length of time that P1s go unaddressed, here’s the trend line on it, here’s our explanation of why that trend line is what it is, and here’s our promise that we will give you an update on this next year, even if it looks bad for us,” then customers are going to be much more likely to give the company the benefit of the doubt. If you’ve addressed my concerns as a customer and said your “mea culpas” first, then I’m going to be more inclined to believe that anything else you want to tell me is truthful and meant for my benefit.

What Is Ultra and What Does It Mean For Me?

[Image: Ultra Man]

Another problem Blackboard has is that it is very hard to understand what they mean by “Ultra.” Sometimes they mean an architecture that enables a new user experience. Sometimes they mean the new user experience that may or may not require the architecture. And at no time do they fully clarify what it means for hosting.

Here’s a webinar from last December that provides a pretty representative picture of what Blackboard’s Ultra talk is like:

Most of the Ultra talk is about the user experience. So it makes sense to infer that Ultra is a new user experience which, for those with any significant experience with Blackboard or many of the other LMS providers, would suggest a new skin (or “lipstick on a pig,” as Phil recently put it). And yet, Ultra doesn’t run on the self-hosted version of Blackboard. Why is that? A cynical person would say (and cynical customers have said) that Blackboard is just trying to push people off of self-hosting. No, says Blackboard, not at all. Actually, the reason we can’t do self-hosted Ultra is because Ultra requires the new cloud architecture, which you can’t self-host.

Except for Ultra on mobile. You can experience Ultra on mobile today, even if you are running self-hosted 9.1.

Huh?

OK, so if I want to run Ultra, I can’t run it self-hosted (except for mobile, which is fine). What if I’m managed hosted? Here’s the slide from that webinar:

[Slide: hosting options, from the webinar above]

There you go. Clear as mud. What is “Premium SaaS”? Is it managed hosting? Is it private cloud? What does it mean for current managed hosting customers? What we have found is that there doesn’t seem to be complete shared understanding even among the Blackboard management team about what the answers to these questions are. Based on what Phil and I have been able to glean about the true state of affairs, here’s how I would explain the situation if I were a Blackboard executive:

  • Ultra is Blackboard’s new product philosophy and user interface. Rather than just sticking in a new tab or drop-down menu and a new bill from a new sales team every time we add new capabilities, we’re trying to design these capabilities into the core product experience in ways that fit with how customers would naturally use them. So rather than thinking about separate products living in separate places—like Collaborate, Community, Analytics, and Content, for example—you can think about synchronous collaboration, non-course groups, student progress tracking, and content sharing naturally when and where you need those capabilities in your daily academic life.

  • Blackboard Learn Cloud [Note: This is my made-up name, not Blackboard’s official product name] is the new architecture that makes Ultra possible for Learn. It also enables you to gain all of the benefits of being in the cloud, like being super-reliable and less expensive. But with regard to Ultra, we can’t create that nifty integrated experience without adding some new technical infrastructure. Learn Cloud enables us to do that. Update: Ultra is still a work in progress and may not be appropriate for all professors and all courses in its current form. Luckily, Learn Cloud also runs the traditional Learn user experience that is available on Learn Enterprise. So you can run Learn Cloud now without impacting your faculty and have them switch over to the Ultra experience—on the same platform—whenever they are ready for it and it is ready for them.

  • Blackboard Learn Enterprise [another Feldstein-invented name] is the classic architecture for Learn, currently on version 9.1. We think that a significant number of customers, both in the US and abroad, will continue to want to use the current architecture for a long time to come, in part because they want or need to self-host. We are committed to actively developing Learn Enterprise for as long as a significant number of customers want to use it. Our published road maps go out two years, but that doesn’t mean we only plan to develop it for another two years. It just means that it’s silly to create technology road maps that are longer than two years, given how much technology changes. Because Learn Enterprise shares a lot of code with Learn Cloud, we actually can afford to continue supporting both as long as customers are buying both in numbers. So we really do mean it when we say plan to keep supporting Enterprise for the foreseeable future. We will also bring as much of the Ultra experience to Enterprise as the technology allows. That won’t be all or most, but it will be some. The product will continue moving forward and continue to benefit from our best thinking.

  • Self-hosted Learn Cloud isn’t going to happen any time soon, which means that self-hosted Ultra isn’t going to happen any time soon. It is possible that the technologies that we are using for Blackboard Cloud will mature enough in the future that we will be able to provide you with a self-hosted version that we feel confident that we can support. (This is a good example of why it is silly to create technology road maps that are more than two years long. Who knows what the Wizards of the Cloud will accomplish in two years?) But don’t hold your breath. For now and the foreseeable future, if you self-host, you will use Learn Enterprise, and we will keep supporting and actively developing it for you.

  • Mobile is a special case because a lot of the functionality of the mobile app has lived in the cloud from Day 1 (unlike Learn Enterprise). So we can deliver the Ultra experience to your mobile apps even if you are running Learn Enterprise at home.

  • Managed hosting customers cannot run Ultra on Learn for the same reason that self-hosted customers cannot: They are currently using Learn Enterprise. They can continue to use Learn Enterprise on managed hosting for as long as they want, as long as they don’t need Ultra. We will, eventually, offer Learn Private Cloud [yet another Feldstein-invented name]. Just as it sounds, this will be a private, Blackboard-hosted instance of Blackboard Cloud. Managed Hosted clients are welcome to switch to Learn Private Cloud when it becomes available, but it is not the same as managed hosting and may or may not meet your needs as well as other options. Please be sure to discuss it with your representative when it becomes available. In the meantime, we’ll provide you with detailed information about what would change if you moved from managed hosting of Blackboard Enterprise to Blackboard Cloud, along with detailed information about what the migration process would be like.

To be clear, I’m not 100% certain that what I’ve described above is factually correct, in part because Phil and I have heard slightly different versions of the story from different Blackboard executives. (I’m fairly sure it’s at least mostly right.) The main point is that, whatever the truth is, Blackboard needs to lay it out more clearly. Right now, they are missing easy wins because they are not communicating well.

Time will tell whether Ultra pays off. I’m actually pretty impressed with what I’ve seen so far. But no matter how good it turns out to be, Blackboard won’t start winning RFPs in real numbers until they start telling their story better.

The post Blackboard’s Messaging Problems appeared first on e-Literate.

Oracle Mobile Cloud Service First Hands-On Experience

Andrejus Baranovski - Fri, 2015-07-31 12:50
Thanks to the SOA Community and Jurgen Kress, I had a chance to play with Oracle MCS (Mobile Cloud Service). This new Oracle product is promoted with full force by the Oracle PM team; there is a dedicated YouTube channel with videos to watch and learn from - Oracle Mobile Platform. Mobile Cloud Service offers a mobile enterprise repository to organize and support your mobile development. Mobile backend services, security, connectors, storage, etc. can be defined and managed in MCS. Web Services published in MCS can be monitored to track performance and errors. All this should simplify the implementation of mobile solutions.

This was my first encounter with MCS and I would like to describe the test I did. MCS UI is implemented with Oracle internal JS framework following Alta UI standard. There are options to monitor and administer MCS instance. I'm more interested in development options:


I will not go through all available options, but only focus on Mobile Backend. Basically we can define a group, where we could include various reusable business logic artefacts (API's). Mainly this will be different Web Service calls. The same Web Service calls can be reused by mobile application developer.

In Mobile Backend section we can edit existing groups or create a new one:


You should think of Mobile Backend as a group of reusable code artefacts (API's). There is an option to create a new API or reuse an existing one. I decided to reuse the existing API for Incidents registration:


This API implements a REST Web Service call to register a new incident; it also allows us to query information about previously reported incidents. This can be tested directly in the MCS environment - we can define sample payload data and simulate a Web Service call to register a new incident:


The Web Service call is successful, as we can see from the log - a new incident is registered and an ID is assigned. The same Web Service will be reused from the mobile application. With MCS we can monitor Web Service usage, number of invocations, errors, etc. - this makes it easier to manage the entire infrastructure for mobile solutions:


To make sure the new incident was successfully registered, I could run another REST call for the same Web Service - to get the incident information by ID:


The result shows the incident data, which means the incident was located successfully:


The Incidents registration service is registered in the API's group; we can edit and test this Web Service online in MCS:


Red Samurai mobile backend service is live - invocation statistics and processing time metrics are aggregated by MCS:

Log Buffer #434: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-31 09:55

This Log Buffer Edition throws the spotlight on some of the salient blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • STANDARD date considerations in Oracle SQL and PL/SQL
  • My good friend, Oracle icon Karen Morton passed away.
  • Multiple invisible indexes on the same column in #Oracle 12c
  • Little things worth knowing: Data Guard Broker Setup changes in 12c
  • Things that are there but you cannot use

SQL Server:

  • Dynamic Grouping in SSRS Reports
  • SQL 2014 Clustered Columnstore index rebuild and maintenance considerations
  • SQL Server 2016 CTP2
  • Azure SQL Database Security Features
  • Visualize the timeline of your SQL jobs using Google graph and email

MySQL:

  • Shinguz: Max_used_connections per user/account
  • Generally in MySQL we send queries massaged to a point where optimizer doesn’t have to think about anything.
  • Replication is the process that transfers data from an active master to a slave server, which reproduces the data stream to achieve, as best as possible, a faithful copy of the data in the master.
  • Unknown column ‘smth’ in ‘field list’ -> Oldie but goodie error
  • Why base64-output=DECODE-ROWS does not print row events in MySQL binary logs

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL.

The post Log Buffer #434: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Less Performance Impact with Unified Auditing in #Oracle 12c

The Oracle Instructor - Fri, 2015-07-31 04:56

There is a new auditing architecture in place with Oracle Database 12c, called Unified Auditing. Why would you want to use it? Because it has significantly less performance impact than the old approach. We now buffer audit records in the SGA and write them asynchronously to disk – that’s the trick.
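A practical consequence of that deferred write is that the most recent audit records may still sit in memory when you query UNIFIED_AUDIT_TRAIL. They can be flushed on demand, and in 12.1 the write mode itself can be switched to immediate writes via DBMS_AUDIT_MGMT – a sketch, not part of the timed test below:

-- push any queued audit records from the SGA to disk
exec DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL

-- optionally trade some performance for safety and write each record immediately
begin
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE,
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);
end;
/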

Other benefits of the new approach are that we now have one centralized way (and one syntax) to deal with all the various auditing features that have been introduced over time, like Fine Grained Auditing etc. But the key improvement in my opinion is the reduced performance impact, because that was often hurting customers in the past. Let’s see it in action! First, I will record a baseline without any auditing:

 

[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jul 31 08:54:32 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select value from v$option where parameter='Unified Auditing';

VALUE
----------------------------------------------------------------
FALSE

SQL> @audit_baseline
Connected.

Table truncated.


Noaudit succeeded.


PL/SQL procedure successfully completed.

Connected.

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.07
Connected.

PL/SQL procedure successfully completed.
SQL> host cat audit_baseline.sql
connect / as sysdba
truncate table aud$;
noaudit select on adam.sales;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

So that is just 100k SELECTs against a 600 MB table with an index on ID, without any auditing so far. Key sections of the AWR report for the baseline:

[AWR report screenshots: baseline run]

The most resource consuming SQL in that period was the AWR snapshot itself. Now let’s see how the old way to audit impacts performance here:

SQL>  show parameter audit_trail

NAME_COL_PLUS_SHOW_PARAM                 TYPE        VALUE_COL_PLUS_SHOW_PARAM
---------------------------------------- ----------- ----------------------------------------
audit_trail                              string      DB, EXTENDED
SQL> @oldaudit
Connected.

Table truncated.


Audit succeeded.


PL/SQL procedure successfully completed.

Connected.

PL/SQL procedure successfully completed.

Elapsed: 00:00:56.42
Connected.

PL/SQL procedure successfully completed.
SQL> host cat oldaudit.sql
connect / as sysdba
truncate table aud$;
audit select on adam.sales by access;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

That was almost 10 times slower! The AWR report confirms that and shows why it is so much slower now:

[AWR report screenshots: run with traditional auditing]

It’s because of the 100k inserts into the audit trail, done synchronously with the SELECTs. The audit trail shows them here:

 

SQL> select sql_text,sql_bind from dba_audit_trail where rownum<=10; 
SQL_TEXT                                           SQL_BIND 
-------------------------------------------------- ---------- 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):1 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):2 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):3 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):4 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):5 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):6 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):7 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):8 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):9 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(2):10 
10 rows selected. 
SQL> select count(*) from dba_audit_trail where sql_text like '%SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1%';

  COUNT(*)
----------
    100000

Now I will turn on Unified Auditing – that requires relinking the Oracle software while the database is down.
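On Linux/UNIX this relink is done against ins_rdbms.mk, roughly as follows – a sketch, with the database and listener already shut down beforehand:

[oracle@uhesse ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@uhesse lib]$ make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME

Afterwards: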

SQL> select value from v$option where parameter='Unified Auditing';

VALUE
----------------------------------------------------------------
TRUE

SQL> @newaudit
Connected.

Audit policy created.


Audit succeeded.


PL/SQL procedure successfully completed.

Connected.

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.90
Connected.

PL/SQL procedure successfully completed.
SQL> host cat newaudit.sql
connect / as sysdba
create audit policy audsales actions select on adam.sales;
audit policy audsales;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

That was still slower than the baseline, but much better than with the old method! Let’s see the AWR report for the last run:

[AWR report screenshots: run with Unified Auditing]

Similar to the first (baseline) run, the snapshot is the most resource consuming SQL during the period. DB time as well as elapsed time are shorter by far than with the old audit architecture. The 100k SELECTs together with the bind variables have been captured here as well:

SQL> select sql_text,sql_binds from unified_audit_trail where rownum<=10; 
SQL_TEXT                                                     SQL_BINDS 
------------------------------------------------------------ ---------- 
ALTER DATABASE OPEN 
create audit policy audsales actions select on adam.sales 
audit policy audsales 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):1 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):2 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):3 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):4 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):5 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):6 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):7 
10 rows selected. 
SQL> select count(*) from unified_audit_trail where sql_text like '%SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1%';

  COUNT(*)
----------
    100000

The first three lines above show that SYS operations are also recorded in the same (Unified!) Audit Trail, by the way. There is much more to say and to learn about Unified Auditing, of course, but this may give you some motivation to evaluate it, especially if you have had performance issues related to auditing in the past. As always: Don’t believe it, test it! :-)


Tagged: 12c New Features, Performance Tuning, security
Categories: DBA Blogs

Oracle Cloud - Modern & Flexible Cloud for Modern Business

Peeyush Tugnawat - Fri, 2015-07-31 01:07

Oracle offers the most comprehensive portfolio of cloud computing solutions in the industry today. Whatever your cloud needs, Oracle has the complete solution for you.

Discoverer and Windows 10

Michael Armstrong-Smith - Thu, 2015-07-30 22:33
Hi everyone
Has anyone had the courage to upgrade to Windows 10 and see if Discoverer Plus still works?

How about the Discoverer server? Has anyone tried that?

If you have, drop me a reply.

Michael

August 6, 2015: Oracle ERP Cloud Customer Forum―The Rancon Group

Linda Fishman Hoyle - Thu, 2015-07-30 17:57

Join us for another Oracle Customer Reference Forum on August 6, 2015, at 9:00 a.m. PT to hear Steven Van Houten, CFO at The Rancon Group. The company is a leader in Southern California community development, commercial building, and land use.

During this Customer Forum call, Van Houten will share with you The Rancon Group’s lessons learned during its implementation and the benefits it is receiving by using Oracle ERP Cloud. He will explain how Oracle ERP Cloud helps The Rancon Group make intelligent decisions, get information out to its mobile workforce, and meet its needs now and in the future.

Register now to attend the live Forum on Thursday, August 6, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time.

CVSS Version 3.0 Announced

Oracle Security Team - Thu, 2015-07-30 16:04

Hello, this is Darius Wiles.

Version 3.0 of the Common Vulnerability Scoring System (CVSS) has been announced by the Forum of Incident Response and Security Teams (FIRST). Although there have been no high-level changes to the standard since the Preview 2 release which I discussed in a previous blog post, there have been a lot of improvements to the documentation.

Soon, Oracle will be using CVSS v3.0 to report CVSS Base scores in its security advisories. To facilitate this transition, the first Critical Patch Update (Oracle’s security advisories) to provide CVSS version 3.0 Base scores will include two sets of risk matrices, one using CVSS v2 and one using v3.0. Subsequent Critical Patch Updates will only list CVSS version 3.0 scores.

While Oracle expects most vulnerabilities to have similar v2 and v3.0 Base Scores, certain types of vulnerabilities will experience a greater scoring difference. The CVSS v3.0 documentation includes a list of examples of public vulnerabilities scored using both v2 and v3.0, and this gives an insight into these scoring differences. Let’s now look at a couple of reasons for these differences.

The v3.0 standard provides a more precise assessment of risk because it considers more factors than the v2 standard. For example, the important impact of most cross-site scripting (XSS) vulnerabilities is that a victim's browser runs malicious code. v2 does not have a way to capture the change in impact from the vulnerable web server to the impacted browser; basically v2 just considers the impact to the former. In v3.0, the Scope metric allows us to score the impact to the browser, which in v3.0 terminology is the impacted component. v2 scores XSS as "no impact to confidentiality or availability, and partial impact to integrity", but in v3.0 we are free to score impacts to better fit each vulnerability. For example, a typical XSS vulnerability, CVE-2013-1937 is scored with a v2 Base Score of 4.3 and a v3.0 Base Score of 6.1. Most XSS vulnerabilities will experience a similar CVSS Base Score increase.
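To make that concrete – these vectors are my reading of the standard scoring for a reflected XSS like CVE-2013-1937, not something quoted from an Oracle advisory – the two Base vectors behind those scores look roughly like this:

CVSS v2  : AV:N/AC:M/Au:N/C:N/I:P/A:N                      (Base Score 4.3)
CVSS v3.0: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N    (Base Score 6.1)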

Until now, Oracle has used a proprietary Partial+ metric value for v2 impacts when a vulnerability "affects a wide range of resources, e.g., all database tables, or compromises an entire application or subsystem". We felt this extra information was useful because v2 always scores vulnerabilities relative to the "target host", but in cases where a host's main purpose is to run a single application, Oracle felt that a total compromise of that application warrants more than Partial. In v3.0, impacts are scored relative to the vulnerable component (assuming no scope change), so a total compromise of an application now leads to High impacts. Therefore, most Oracle vulnerabilities scored with Partial+ impacts under v2 are likely to be rated with High impacts and therefore more precise v3.0 Base scores. For example, CVE-2015-1098 has a v2 Base score of 6.8 and a v3.0 Base score of 7.8. This is a good indication of the differences we are likely to see. Refer to the CVSS v3.0 list of examples for more details on the scoring of this vulnerability.

Overall, Oracle expects v3.0 Base scores to be higher than v2, but bear in mind that v2 scores are always relative to the "target host", whereas v3.0 scores are relative to the vulnerable component, or the impacted component if there is a scope change. In other words, CVSS v3.0 will provide a better indication of the relative severity of vulnerabilities because it better reflects the true impact of the vulnerability being rated in software components such as database servers or middleware.


For More Information

The CVSS v3.0 documents are located on FIRST's web site at http://www.first.org/cvss/

Oracle's use of CVSS [version 2], including a fuller explanation of Partial+ is located at http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html

My previous blog post on CVSS v3.0 preview is located at https://blogs.oracle.com/security/entry/cvss_version_3_0_preview

Eric Maurice's blog post on Oracle's use of CVSS v2 is located at https://blogs.oracle.com/security/entry/understanding_the_common_vulne_2

Oracle Priority Support Infogram for 30-JUL-2015

Oracle Infogram - Thu, 2015-07-30 13:09

Open World
Oracle OpenWorld 2015 - Registrations Open, from Business Analytics - Proactive Support.
Oracle Support
Top 5 Ways to Personalize My Oracle Support, from the My Oracle Support blog.
RDBMS
A set of three updates from Upgrade your Database - NOW! in this issue:
ORAchk - How to log SRs and ERs for ORAchk
Things to consider BEFORE upgrading to Oracle 12.1.0.2 to AVOID poor performance and wrong results
Optimizer Issue in Oracle 12.1.0.2: "Reduce Group By"
PeopleSoft/SES
Upgrade your SES Database From 11.2.0.3 to 11.2.0.4 for the PeopleSoft Search Framework, from the PeopleSoft Technology Blog.
Java
JShell and REPL in Java 9, from The Java Source.
Modifying the run configuration for the JUnit test runner, from Andreas Fester's Blog.
MySQL
Learn About Queries, Stored Routines, and More MySQL Developer Skills, from Oracle's MySQL Blog.
Fusion Applications
Careful Use of Aggregate Functions, from the Fusion Applications Developer Relations blog.
ADF
ADF 11.1.1.9 Goodies – Conveyor Belt Component and Alta UI, from WebLogic Partner Community EMEA.
And from the same source:
Create and set clientAttribute to ADF Faces component programmatically to pass value on client side JavaScript
Solaris
Docker coming to Oracle Solaris, from the Oracle Solaris blog.
Live storage migration for kernel zones, from The Zones Zone blog.
Ops Center
Recovering LDoms From a Failed Server, from the Ops Center blog.
EBS
From the Oracle E-Business Suite Support blog:
Webcast: Setup & Troubleshooting Dunning Plans in Oracle Advanced Collections
Troubleshooting the Closing of Work Orders in EAM and WIP
From the Oracle E-Business Suite Technology blog:
Database 12.1.0.2 Certified with EBS 11i on Additional Platforms
Transportable Database 12c Certified for EBS 12.2 Database Migration
Quarterly EBS Upgrade Recommendations: July 2015 Edition


Why Move to Cassandra?

Pythian Group - Thu, 2015-07-30 12:05

Nowadays Cassandra is getting a lot of attention, and we’re seeing more and more examples of companies moving to Cassandra. Why is this happening? Why are companies with solid IT structures and internal knowledge shifting, not only to a different paradigm (Read: NoSQL vs SQL), but also to completely different software? Companies don’t simply move to Cassandra because they feel like it. A drive or need must exist. In this post, I’m going to review a few use cases and highlight some of the interesting parts to explain why these particular companies adopted Cassandra. I will also try to address concerns about Cassandra in enterprise environments that have critical SLAs and other requirements. And at the end of this post, I will go over our experience with Cassandra.

Cassandra Use Cases

Instagram

Cutting costs. How? Instagram was using an in-memory database before moving to Cassandra. Memory is expensive compared to disk. So if you do not need the advanced performance of an in-memory datastore, Cassandra can deliver the performance you need and help you save money on storage costs. Plus, as mentioned in the use case, Cassandra allows Instagram to continually add data to the cluster. They also loved Cassandra’s reliability and availability features.

eBay

Cassandra proved to be the best technology, among the ones they tested, for their scaling needs. With Cassandra, eBay can look up historical behavioral data quickly and update their recommendation models with low latency. eBay has deployed Cassandra across multiple data centers.

Spotify

Spotify moved to Cassandra because it’s a highly reliable and easily scalable datastore. Their old datastore was not able to keep up with the volume of writes and reads they had. Cassandra’s scalability with its multi-datacenter replication, plus its reliability, proved to be a hit for them.

Comcast

They were looking for three things: scale, availability, and active-active. Only Cassandra provided all of them. Their transition to Cassandra went smoothly, and they enjoy the ease of development Cassandra offers.

Cassandra brings something new to the game

NoSQL existed before Cassandra. There were also other mature technologies when Cassandra was released. So why didn’t companies move to those technologies?

Like the subtitle says, Cassandra brings something new to the game. In my experience, and as discussed in some of the use cases above, one of its strongest points is Cassandra’s ease of use. Once you know how to configure Cassandra, it’s almost “fire-and-forget”! It just works. In an era like ours, where you see new technologies appear every day, on different stacks, with different dependencies, Cassandra’s easy installation and basic configuration are refreshingly simple, which leads us to…

Scalability!! Yes, it scales linearly. This level of scalability, combined with its ease of deployment, takes your infrastructure to another level.

Last but not least, Cassandra is highly flexible. You can tune your consistency settings per request rather than cluster-wide. You need more speed? Pick a lower consistency level. You want stronger data integrity? Push the consistency level up. It is up to you, your project, and your requirements, and you can easily change it.
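To make the tunable consistency point concrete, here is a minimal sketch using the DataStax Python driver (cassandra-driver); the contact point, keyspace and users table are hypothetical, so treat it as an illustration rather than a drop-in script.

# Per-request (tunable) consistency with the DataStax Python driver.
# The contact point, keyspace and "users" table below are hypothetical.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo_keyspace")

# Favour speed: acknowledge the write once a single replica has it.
fast_write = SimpleStatement(
    "INSERT INTO users (id, name) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE)
session.execute(fast_write, (1, "alice"))

# Favour integrity: require a quorum of replicas to answer the read.
safe_read = SimpleStatement(
    "SELECT name FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM)
print(session.execute(safe_read, (1,)).one().name)

cluster.shutdown()

The same table can be written at ONE and read at QUORUM (or vice versa) without any schema or cluster-level change, which is exactly the per-request flexibility described above.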

Also don’t forget its other benefits: open source, free, geo-replication, low latency etc…

Pythian’s Experience with Cassandra

Cassandra is not without its challenges. Like I said earlier, it is a new technology that makes you think differently about databases. And because it’s easy to deploy and work with, it can lead to mistakes that seriously impact scalability and application/service performance once they start to scale.

And that is where we come in. We ensure that companies just starting out with Cassandra have well built and well designed deployments, so they don’t run into these problems. Starting with a solid architecture plan for a Cassandra deployment and the correct data model can make a whole lot of difference!

We’ve seen some deployments that started out well, but without proper maintenance, fell into some of the pitfalls or edge cases mentioned above. We help out by fixing the problem and/or recommending changes to the original deployment, so it will keep performing well without issues! And because Cassandra delivers high resilience, many of these problems can be solved without having to deal with downtime.

Thinking about moving to Cassandra? Not sure if open source or enterprise is right for you? Need project support? Schedule a free assessment so we can help you with next steps!

The post Why Move to Cassandra? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Advantages of using REST-based Integrations in PeopleSoft

Javier Delgado - Thu, 2015-07-30 08:49
Support for REST-based services was introduced in PeopleTools 8.52, although you may also build your own REST services using IScripts in earlier releases (*). With PeopleTools 8.52, Integration Broker includes support for REST services, enabling PeopleSoft to act as both a consumer and a provider.

What is REST?
There is plenty of documentation on the Web about REST, its characteristics and benefits. I personally find the tutorial published by Dr. Elkstein (http://rest.elkstein.org) particularly illustrative.

In a nutshell, REST can be seen as a lightweight alternative to other traditional Web Services mechanisms such as RPC or SOAP. A REST integration has considerably less overhead than the two previously mentioned methods, and as a result is more efficient for many types of integrations.

Today, REST is the dominant standard for mobile applications (many of which use REST integrations to interact with the backend) and for Rich Internet Applications using AJAX.
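As a hedged illustration of how lightweight a REST call is compared to a SOAP exchange, the sketch below consumes a JSON resource with nothing more than an HTTP GET; the URL and query parameter are hypothetical.

# A plain HTTP GET returning JSON: no envelope, no WSDL, just a URL,
# a verb and a response body. The URL and parameter are hypothetical.
import requests

response = requests.get(
    "https://example.com/api/countries",
    params={"lang": "en"},
    headers={"Accept": "application/json"},
    timeout=10)
response.raise_for_status()

for country in response.json():
    print(country)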

PeopleSoft Support
As I mentioned before, PeopleSoft support was included in PeopleTools 8.52. This included the ability to use the Provide Web Service Wizard for REST services, on top of the already supported SOAP services. Also, the Send Master and Handler Tester utilities were updated so they could be used with REST.

PeopleTools 8.53 delivered support for one of the most interesting features of REST GET integrations: caching. Using this feature, PeopleSoft can, as a service provider, indicate that the response should be cached (using the SetRESTCache method of the Message object). In this way, the next time a consumer asks for the service, the response will be retrieved from the cache instead of executing the service again. This is particularly useful when the returned information does not change very often (i.e. lists of countries, languages, etc.), and can lead to performance gains over a similar SOAP integration.
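A quick way to see the benefit from the consumer side is to call the same GET service twice and compare response times; the second call should typically come back faster because it is served from the provider's cache. The sketch below assumes a hypothetical endpoint, and the provider-side SetRESTCache call (PeopleCode) is not shown.

# Issue the same GET twice to observe the effect of provider-side caching.
# The endpoint URL is hypothetical.
import time
import requests

URL = "https://psoft.example.com/countries"  # hypothetical cached GET service

for attempt in (1, 2):
    start = time.perf_counter()
    response = requests.get(URL, headers={"Accept": "application/json"}, timeout=10)
    response.raise_for_status()
    print(f"attempt {attempt}: {len(response.json())} rows in {time.perf_counter() - start:.3f}s")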

PeopleTools 8.54 brought, as in many other areas, significant improvements to the PeopleSoft support. In the first place, the security of inbound services (in which PeopleSoft acts as the provider) was enhanced so that services can be required to be consumed using SSL, basic HTTP authentication, basic HTTP authentication and SSL, or none of these.

On top of that, Query Access Services (QAS) were also made accessible through REST, so the creation of new provider services can be as easy as creating a new query and exposing it to REST.
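As a sketch of what consuming such a query-backed service can look like from the client side, the snippet below sends a GET with basic HTTP authentication over SSL, in line with the 8.54 security options above; the host, path, query name and parameter are all hypothetical and do not reflect the actual QAS URL format.

# Consuming a query exposed over REST with basic authentication over SSL.
# Host, path, query name and parameter are hypothetical.
import requests
from requests.auth import HTTPBasicAuth

BASE_URL = "https://psoft.example.com"        # hypothetical gateway host
QUERY_PATH = "/restquery/MY_EMPLOYEE_QUERY"   # hypothetical path to the exposed query

response = requests.get(
    BASE_URL + QUERY_PATH,
    params={"maxrows": 50},                   # hypothetical parameter
    auth=HTTPBasicAuth("PS_USER", "secret"),
    headers={"Accept": "application/json"},
    timeout=30)
response.raise_for_status()
print(response.json())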

Finally, the new Mobile Application Platform (an alternative to FLUID for mobilising PeopleSoft content) also uses REST as a cornerstone.

Conclusions
Although REST support is relatively new compared to SOAP web services, it has been available in PeopleSoft for a while now. Its efficiency and performance (remember GET service caching) make it an ideal choice for many integration scenarios. I'm currently building a mobile platform that interacts with PeopleSoft using REST services. This is keeping me busy, and you may have noticed that I'm not posting as regularly on this blog, but hopefully before long I will be able to share some lessons learned from a large-scale REST implementation.


(*) Although it's possible to build REST services using IScripts, the Integration Broker solution introduced in PeopleTools 8.52 is considerably easier to implement and maintain. So, if you are on PeopleTools 8.52 or a higher release, Integration Broker would be the preferred approach. If you are on an earlier release, a PeopleTools upgrade would actually be the preferred approach, but I understand there might be other constraints. :)