
Feed aggregator

Measuring Tuxedo Queuing in the PeopleSoft Application Server

David Kurtz - 2 hours 22 min ago

Why Should I Care About Queuing?

Queuing in the application server is usually an indicator of a performance problem, rather than a problem in its own right. Requests will back up on the inbound queue because the application server cannot process them as fast as they arrive. This is usually seen on the APPQ, which is serviced by the PSAPPSRV process, but it applies to other server processes too. Common causes include (but are not limited to):
  • Poor performance of either SQL on the database or PeopleCode executed within the application server is extending service duration
  • The application server domain is undersized for the load. Increasing the number of application server domains or application server processes may be appropriate. However, before increasing the number of server processes, it is necessary to ensure that the physical server has sufficient memory and CPU to support the domain (if the application server CPU is overloaded, then requests move from the Tuxedo queues to the operating system run queue).
  • The application server has too many server processes per queue, causing contention in the system calls that enqueue and dequeue requests to and from the IPC queue structure. A queue with more than 8-10 application server processes can exhibit this contention: there will be a queue of inbound requests, yet not all of the server processes will be busy.
When user service requests spend time queuing in the application server, that time is part of the users' response time. Application server queuing is generally to be avoided (although it may sometimes be the least-bad alternative).
What you do about queuing depends on the circumstances, but it is something that you do want to know about.
3 Ways to Measure Application Server Queuing

There are a number of ways to detect queuing in Tuxedo:
  • Direct measurement of the Tuxedo domain using the tmadmin command-line interface. A long time ago I wrote a shell script, tuxmon.sh. It periodically runs the printqueue and printserver commands on an application server and extracts comma-separated data to a flat file that can then be loaded into a database (see the sketch after this list). It would have to be configured for each domain in a system.
  • Direct Measurement with PeopleSoft Performance Monitor (PPM).  Events 301 and 302 simulate the printqueue and printserver commands.  However, event 301 only works from PT8.54 (and at the time of writing I am working on a PT8.53 system).  Even then, the measurements would only be taken once per event cycle, which defaults to every 5 minutes.  I wouldn't recommend increasing the sample frequency, so this will only ever be quite a coarse measurement.
  • Indirect measurement from sampled PPM transactions. This measure also includes time spent on the return queue and time taken to unpack the Tuxedo message. This technique is what the rest of this article is about.
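For illustration, here is a minimal sketch of the tuxmon.sh approach, assuming the domain's environment (TUXCONFIG etc.) is already set and tmadmin is on the PATH; the output path and filtering are illustrative rather than taken from the original script:

#!/bin/sh
# Sample Tuxedo queue and server status once, appending timestamped rows
# to a flat file that can later be loaded into a database.
# pq (printqueue) and psr (printserver) are standard tmadmin commands;
# -r runs tmadmin in read-only mode.
NOW=$(date '+%Y-%m-%d %H:%M:%S')
{ echo 'pq'; echo 'psr'; } | tmadmin -r 2>/dev/null |
  grep -v '^>' |
  awk -v now="$NOW" 'NF {print now "," $0}' >> /tmp/tuxmon.csv

Run from cron at a regular interval, this builds up a time series of queue and server status that can be analysed in the database.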
Indirectly Measuring Application Server Queuing from Transactional Data

Every PIA and Portal request includes a Jolt call made by the PeopleSoft servlet to the domain. The Jolt call is instrumented in PPM as transaction 115. Various layers in the application server are instrumented in PPM, and the highest point is transaction 400, which is where the service enters the PeopleSoft application server code. Transaction 400 is always the immediate child of transaction 115. The difference in the duration of these transactions is the duration of the following operations:
  • Transmit the message across the network from the web server to the JSH.  There is a persistent TCP socket connection.
  • To enqueue the message on the APPQ queue (including writing the message to disk if it cannot fit on the queue).
  • Time spent in the queue
  • To dequeue the message from the queue (including reading the message back from disk if it was written there).
  • To unpack the Tuxedo message and pass the information to the service function.
  • And then repeat the process for the return message back to the web server via the JSH queue (which is not shown in tmadmin).
I am going to make the assumption that the majority of this time is spent by the message waiting in the inbound queue, and that the time spent on the other activities is negligible. This is not strictly true, but it is good enough for practical purposes. Any error means that I will tend to overestimate queuing.
Some simple arithmetic can convert this duration into an average queue length. A queue length of n means that n requests are waiting in the queue. Each second, those waiting requests accumulate n seconds of queue time between them, so the number of seconds of queue time per second of elapsed time is the same as the queue length. For example, 180 seconds of queue time (after scaling up by the sampling ratio) accumulated over a 60-second period implies an average queue length of 3.
I can take all the sampled transactions in a given time period and aggregate the time spent between transactions 115 and 400. I must multiply it by the sampling ratio, and then divide it by the duration of the time period over which I am aggregating. That gives me the average queue length for that period.
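Putting that together, where T is the length of the aggregation period in seconds, and d115 and d400 are the durations of a sampled transaction 115 and its child transaction 400:

average queue length = sampling ratio * SUM(d115 - d400) / T

This is the avg_qlen expression in the query below, with T = 60 because transactions are aggregated per minute.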
This query aggregates queue time across all application server domains in each system.  It would be easy to examine a specific application server, web server or time period.
WITH c AS (
SELECT B.DBNAME, b.pm_sampling_rate
, TRUNC(c115.pm_agent_Strt_dttm,'mi') pm_agent_dttm
, A115.PM_DOMAIN_NAME web_domain_name
, SUBSTR(A400.PM_HOST_PORT,1,INSTR(A400.PM_HOST_PORT,':')-1) PM_tux_HOST
, SUBSTR(A400.PM_HOST_PORT,INSTR(A400.PM_HOST_PORT,':')+1) PM_tux_PORT
, A400.PM_DOMAIN_NAME tux_domain_name
, (C115.pm_trans_duration-C400.pm_trans_duration)/1000 qtime
FROM PSPMAGENT A115 /*Web server details*/
, PSPMAGENT A400 /*Application server details*/
, PSPMSYSDEFN B
, PSPMTRANSHIST C115 /*Jolt transaction*/
, PSPMTRANSHIST C400 /*Tuxedo transaction*/
WHERE A115.PM_SYSTEMID = B.PM_SYSTEMID
AND A115.PM_AGENT_INACTIVE = 'N'
AND C115.PM_AGENTID = A115.PM_AGENTID
AND C115.PM_TRANS_DEFN_SET=1
AND C115.PM_TRANS_DEFN_ID=115
AND C115.pm_trans_status = '1' /*valid transaction only*/
--
AND A400.PM_SYSTEMID = B.PM_SYSTEMID
AND A400.PM_AGENT_INACTIVE = 'N'
AND C400.PM_AGENTID = A400.PM_AGENTID
AND C400.PM_TRANS_DEFN_SET=1
AND C400.PM_TRANS_DEFN_ID=400
AND C400.pm_trans_status = '1' /*valid transaction only*/
--
AND C115.PM_INSTANCE_ID = C400.PM_PARENT_INST_ID /*parent-child relationship*/
AND C115.pm_trans_duration >= C400.pm_trans_duration
), x as (
SELECT dbname, pm_agent_dttm
, AVG(qtime) avg_qtime
, MAX(qtime) max_qtime
, c.pm_sampling_rate*sum(qtime)/60 avg_qlen
, c.pm_sampling_rate*count(*) num_services
FROM c
GROUP BY dbname, pm_agent_dttm, pm_sampling_rate
)
SELECT * FROM x
ORDER BY dbname, pm_agent_dttm
  • Transactions are aggregated per minute, so the queue time is divided by 60 at the end of the calculation because we are measuring time in seconds.
Then the results from the query can be charted in Excel (see http://www.go-faster.co.uk/scripts.htm#awr_wait.xls). The chart below was taken from a real system undergoing a performance load test.


Is this calculation and assumption reasonable?

The best way to validate this approach would be to measure queuing directly using tmadmin. I could also try this on a PT8.54 system, where event 301 will report the queuing. This will have to wait for a future opportunity.
However, I can compare queuing with the number of busy application servers as reported by PPM event 302 for the CRM database. Around 16:28 queuing all but disappears. We can see that there were a few idle application servers, which is consistent with the queue being cleared. Later the queuing comes back, and most of the application servers are busy again. So it looks reasonable.
Application Server Activity ©David Kurtz, Go-Faster Consultancy Ltd.

SQL for Beginners : Videos and Articles

Tim Hall - 8 hours 40 min ago

I've been saying for some time I should do some more entry-level content, but it's been kind of hard to motivate myself. I mostly write about things I'm learning or actively using, so going back and writing entry-level content is not something that usually springs to mind.

Recently I’ve got involved in a number of “grumpy old man” conversations about the lack of SQL knowledge out there. That, combined with a few people at work getting re-skilled, prompted me to get off my ass and give it a go. It’s actually quite difficult trying to get yourself into the head-space of someone who is coming fresh to the subject. You don’t want to pitch it too low and sound patronizing, but then pitching it too high makes you sounds like an elitist dick.

Anyway, after completing the Efficient Function Calls from SQL series of videos, I decided to jump into a SQL for Beginners series. I’m also putting out some articles, which are essentially transcripts of the videos, to allow people to copy/paste the examples. More importantly, they have links to articles with more details about the subject matter.

Once I’ve done a quick pass through the basics, I’ll start adding a bit more depth. I’ll probably dip in and out of the series. If I stick with it too long I’ll probably go crazy from boredom. :)

If you know someone who is fresh to SQL, can you ask them to take a look and give me some feedback? It would be nice to know if they are helpful or not.

Cheers

Tim…


Oracle Access Manager (OAM) 11g : Architecture (Topic from our Training)

Online Apps DBA - 15 hours 18 min ago


This post covers the Oracle Access Manager (OAM) architecture components and is from our Oracle Access Manager (OAM) 11g training that I'll personally be teaching in a live virtual class (starting 20th Aug). You can register for this training here.

If you wish to watch FREE video tutorials on OAM, then subscribe to our YouTube channel by clicking here.

 

[Diagram: OAM 11g architecture]

 

Note: Image from Oracle A-Team’s blog (must read blog)

Oracle Access Manager 11g consists of:

1. Database for OAM: The database hosts OAM's metadata and the policies defined by administrators to secure business applications. You use the Repository Creation Utility (RCU) to create the OAM schema (an illustrative command is sketched after this list).
2. LDAP Server: This is the directory server, usually Oracle Internet Directory (OID), Oracle Unified Directory (OUD) or Microsoft Active Directory, where users and groups are stored. By default OAM uses WebLogic's embedded LDAP server, but you can change that to one of the external LDAP servers mentioned earlier.

3. OAM Domain Admin Server: OAM is configured in a WebLogic domain (Admin & Managed Server). The Admin Server hosts the WebLogic Console and OAM's Admin Console (the GUI to manage OAM artefacts like Application Domains, Policies, WebGate Instances etc.). We cover these OAM artefacts on Day 4 of OAM Training.

4. OAM Domain Managed Server: The OAM Managed Server is the runtime component that acts as the Policy Decision Point (PDP). WebGate (the Policy Enforcement Point, or PEP) connects to this server to get policy details for a resource.

5. Application: This is the resource that is protected by OAM. You can optionally configure an OAM Agent on the application.

6. WebServer: Web servers like OHS/Apache act as a reverse proxy for the application, and the Policy Enforcement Point (WebGate) is deployed on the web server.

7. OAM Agents (WebGates): These are Policy Enforcement Points that are deployed on the web server and connect to the OAM Managed Server for policy decisions. We cover OHS & WebGate in detail on Day 3 of OAM Training.
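As promised under item 1, here is a sketch of creating the OAM schema with RCU from the command line. This is an illustration only: option names and component IDs can vary by version, and the connect string and DEV prefix below are made-up examples, so verify against the RCU documentation for your release.

# Hypothetical silent-mode invocation; running ./rcu with no arguments
# walks through the same choices in the GUI.
./rcu -silent -createRepository \
  -connectString dbhost:1521:orcl \
  -dbUser sys -dbRole sysdba \
  -schemaPrefix DEV \
  -component OAM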

Stay tuned for my next post, which covers how the OAM request flow works and how all the components discussed above are used.

To learn more about why you should learn Oracle Access Manager click here, and to see what we cover in this online live virtual training click here.

Quiz for you (answer under comments section or in our facebook group):

Q: OHS 12c comes with WebGate software so you don’t need to install WebGate software on OHS host
A: TRUE or FALSE

 

The post Oracle Access Manager (OAM) 11g : Architecture (Topic from our Training) appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Oracle Priority Support Infogram for 03-SEP-2015

Oracle Infogram - 17 hours 37 min ago

RDBMS
What is the Oracle ASH time waited column?
Can you have Oracle Multitenant in Oracle 12.1.0.2 SE2?, from Upgrade your Database - NOW!
SQL
Improve SQL Query Performance by Using Bind Variables, from All Things SQL.
SQL Developer
Four Minute Video Tip: Configuring SQL Developer for the First Time, from that JEFF SMITH.
Scripting Oracle
node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database), from Scripting and Oracle: Christopher Jones.
MySQL
MySQL Enterprise Monitor 2.3.21 has been released, from MySQL Enterprise Tools Blog.
Solaris
Rapid fire Weblogic instances, from Solaris 11.
SOA
Top tweets SOA Partner Community – August 2015, from SOA & BPM Partner Community Blog.
Java
Just-in-Time Compilation with JITWatch, from The Java Source.
How to fix java.io.InvalidClassException error when accessing Oracle ACM Case API via Jdeveloper, from the SOA & BPM Partner Community Blog.
Hyperion
Patch Set Update: Hyperion Strategic Finance 11.1.2.3.507, from Business Analytics – Proactive Support.
From the same source:
EPM Patch Set Updates - August 2015
EBS
From the Oracle E-Business Suite Support blog:
Webcast: Data Quality Management (DQM) Search & Match Deep Dive
Getting Java Errors when Loading Certificates in Oracle Exchange and iProcurement Punchout?
EBS AP, AR and EBTax Setup/Data Integrity Analyzer (Doc ID 1529429.1)
Webcast: Oracle Demantra Release 12.2.5.1, Part 1
EBS Financials August 2015 Recommended Patch Collections (RPCs) just released!!
From the Oracle E-Business Suite Technology blog:
"Certified" vs. "Error Correction Support": What's the Difference?
…and Finally
Magnetic Wormhole Connecting Two Regions of Space Created for the First Time, from EurekAlert.


node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - 18 hours 10 min ago

Version 1.1 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM.

This is a stabilization release, with one improvement to the behavior of the local connection pool. The add-on now checks whether pool.release() should automatically drop sessions from the connection pool. This is triggered by conditions where the connection is deemed to have become unusable. A subsequent pool.getConnection() will, of course, create a new, replacement session if the pool needs to grow.

Immediately as we were about to release, we identified an issue with lobPrefetchSize. Instead of delaying the release, we have temporarily made setting this attribute a no-op.

The changes in this release are:

  • Enhanced pool.release() to drop the session if it is known to be unusable, allowing a new session to be created.

  • Optimized query memory allocation to account for different database-to-client character set expansions.

  • Fixed build warnings on Windows with VS 2015.

  • Fixed truncation issue while fetching numbers as strings.

  • Fixed AIX-specific failures with queries and RETURNING INTO clauses.

  • Fixed a crash with NULL or uninitialized REF CURSOR OUT bind variables.

  • Fixed potential memory leak when connecting throws an error.

  • Added a check to throw an error sooner when a CURSOR type is used for IN or IN OUT binds. (Support is pending).

  • Temporarily disabled setting lobPrefetchSize.

Issues and questions about node-oracledb can be posted on GitHub or OTN. We need your input to help us prioritize work on the add-on. Drop us a line!

Installation instructions are here.
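For anyone trying this release, installation itself is a single npm command. Note that node-oracledb builds against Oracle client libraries (for example, Oracle Instant Client), which must already be present on the machine:

npm install oracledb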

Four Weeks with the Garmin Vivosmart

Oracle AppsLab - Thu, 2015-09-03 14:20

The Year of Data continues for me, and yesterday, I finished a four-week relationship with the Garmin Vivosmart.

I use relationship purposefully here because if you use a wearable to track fitness and sleep, you’re wearing it a lot, and it actually becomes a little friend (or enemy) that’s almost always with you. Wearables are very personal devices.

If you’re scoring at home, 2015 has gone thusly for me:

After that month of nothing, I nearly ended the experimentation. However, I already had two more wearables, new and still in the box. So, next up was the Vivosmart.

I didn’t know Garmin made wearables at all until OHUG 2014 where I met a couple people wearing Garmin devices. Turns out, Garmin makes an impressive array of wearable devices, running the gamut from casual to hardcore athlete.

I chose the Vivosmart, at the casual end of the spectrum, because of its display and notification capabilities.

As always, before I launch into my impressions, you might want to read real reviews from Engadget and The Verge.

The Band

Finally, a wearable that doesn’t require a laptop to configure. The setup was all mobile, download the app and pair, very nice for a change.


After the initial setup, however, I did need to tether the Vivosmart to my laptop, but I don’t think my case is common.

The firmware version that came out-of-the-box was 2.60, and after reading the Engadget review, I decided to update to the latest version. Specifically, I wanted the notification actions that came in 3.40. There didn’t seem to be a way to get this update over-the-air, so I had to install Garmin Express on my Mac and tether the Vivosmart to install the update, a very quick and painless process.

This must have been because I was stepping through several firmware versions, because the Vivosmart did later get an over-the-air update at some point without Garmin Express.

Like all the rest, the Vivosmart has a custom cable for charging and tethering, and this one looks like a mouthguard.


Looks aside, getting the contacts to line up just right was a learning process, but happily, I didn’t charge it very often.

The low power, touch display is pretty cool. The band feels rubbery, and the display is completely integrated with no visible bezel, pretty impressive bit of industrial design. The display is surprisingly bright, easily visible in full sunlight and useful as a flashlight in the dark.

There are several screens you swipe to access, and they can be configured from the mobile app, e.g. I quickly ended up hiding the music control, more on that in a minute. Long-pressing opens another set of options and menus.

The Vivosmart has sleep tracking, one thing I actually missed during my device cleanse. Like the Jawbone UP24, it provides a way to track sleep manually. I tried this and failed miserably because somehow during the night the sleep tracking ended.

The reason? The display activates when anything touches it. So, while I slept, the display touched the sheets, the pillow, etc. registering each touch as an interaction, which finally resulted in turning off sleep mode.

This is exactly how I discovered the find phone option. While using my laptop, I wore the Vivosmart upside down to prevent the metal Garmin clasp on the underside of the device from scratching the aluminum, a very common problem with wrist-worn accessories.

During a meeting my phone started blinking its camera flash and blaring a noise. A notification from Garmin Connect declared it had found my phone. I looked at the band, and sure enough, it was in one of the nested menus.

So, the screen is cool, but it tends to register everything it touches, even water activated it. Not to mention the rather unnerving experience of the display coming on in a dark room while partially awake, definitely not cool.

Luckily, I found the band and app auto-detect sleep, a huge save.

Functionally, the battery life was about five days, which is nice. When the battery got low, a low battery icon appeared on the time and date screen. You can see it in the picture. Once full, that icon disappeared, also nice.

The Vivosmart can control audio playing on the phone, a nice feature for running I guess. I run with Bluetooth headphones, and having two devices paired for audio confused my phone, causing it to play through its own speakers. So, I disabled the playback screen via the app.

Like most fitness bands, this one is water resistant to 5 ATM (50 meters), and I wore it in the shower with no ill effects, except for the random touches when water hit the device’s screen. I actually tested this by running water on it and using the water to navigate through the screens.

Syncing the band with the phone was an adventure. Sometimes, it was immediate. Other times, I had to toggle Bluetooth off/on. Could be my impatience, but the band would lose connectivity sometimes when it was clearly within range, so I don’t think it was me.

The Vivosmart has a move indicator which is nice as a reminder. However, I quickly disabled it because its times weren’t configurable, and it would go off while I was moving. Seriously, that happened a few times.

The App and Data

As with most fitness trackers, Garmin provides both a mobile app and a web app. Both are cleanly designed and easy to use, although I didn’t use the web app much at all. Garmin Connect has a nice array of features, to match the range of athletes to which they cater, I suppose.


I probably only used 25% of the total features, and I liked what I used.

I did find the mobile app a bit tree-based, meaning I found myself backing up to the main dashboard and then proceeding into another section.

Garmin tracks the usual activity data, steps, calories, miles, etc. There’s a wide array of activities you can choose from, but I’m a boring treadmill runner so I used none of that.

For sleep, it tracks deep and light sleep and awake time, and I found something called "Sleep Mood"; no idea what that is.


One feature I don’t recall seeing anywhere else is the automatic goal setting for steps which increases incrementally as you meet your daily goal. The starting default was 7,500 steps, and each day, the goal rose a little, I assume based on how much I had surpassed it the previous day. It topped out at 13,610.

I passed the goal every day I wore the Vivosmart, so I don’t know what happens if you fail to meet it.

You can set the goal to be fixed, but I liked this daily challenge approach. There were days I worried I wouldn’t make the step number, and it actually did spur me to be more active. I guess I’m easily manipulated.

Possibly the biggest win for Garmin Connect is its notification capabilities. It supports call, text and calendar notifications, like some others do, but in addition, there is also a nice range of other apps from which you can get notifications.


And there’s the feature I mentioned earlier, taking actions from the band. I tried this with little success, but I only turned on notifications for text messages.

One possible reason why Garmin has such robust notifications may be its developer ecosystem. There’s a Garmin Connect API and a store for third party apps. I didn’t use any, mostly because I’m lazy.

That, and one of the kind volunteers for our guerrilla Apple Watch testing at OHUG warned me that some apps had borked his Garmin. He had the high-end fenix 3, quite a nice piece of technology in an Ultan-approved design.

Finally, Garmin Connect offers exports and integrations with other fitness services like RunKeeper, Strava, etc. They’re definitely developer-friendly, which we like.

Overall, I found the Vivosmart to be an average device, some stuff to like, some stuff to dislike. The bland black version I chose didn’t help; Ultan (@ultan) would hate it, but Garmin does offer some color options.

I like the apps and the ecosystem, and I think the wide range of devices Garmin offers should make them very sticky for people who move from casual running to higher level fitness.

If I end up going back to Garmin, I’ll probably get a different device. If only I could justify the fenix 3, I’m just not serious enough, would feel like a poseur.

Find the comments.

COLLABORATE16 IOUG – Call For Papers

Pythian Group - Thu, 2015-09-03 11:48

There’s so many ways to proceed
To get the knowledge you need
One of the best
Stands out from the rest
COLLABORATE16 – indeed!

Why not be part of the show
By sharing the stuff that you know
Got something to say
For your colleagues each day
Call for papers –> let’s go

I believe many of you would agree that regardless of how insignificant you believe your corner of the Oracle technology may be, everyone has something to say. I attended my first show in Anaheim CA USA in 1990 and started presenting at shows the year after in Washington DC USA. It's not hard to get over the hump, moving from "I would love to present a paper at a show but I just don't have the koyich" to "wow, that was fun". The only way you will ever get the strength is to do it (and do it and do it …).

Some suggestions for getting started …

  1. Co-present with a colleague
  2. Collaborate through paper and slides development WITH your colleague rather than parcel off portions to one another then merge at the end.
  3. Be cautious of trying to cover too much in too little time (I once attended a session at IOUW [a pre-cursor to COLLABORATE] where the presenter had over 400 slides to cover in 45 minutes).
  4. Ask for assistance from seasoned presenters (mentor/protégé type relationship).
  5. Go slowly at first and set yourself some realistic but aggressive goals.

The experience of presenting at shows is rewarding and I for one do it as much as I can … Ensuring Your Physical Standby is Usable and Time to Upgrade to 12c (posting of 2015 presentation details pending).

The confidence gained, personal koyich, and rewards of presenting at events are lifelong and can help propel your career into the ionosphere. Speaking of confidence, 20 months ago I started playing bridge. Now look where my experience presenting at shows and writing for Oracle Press got me … check this out :).

Surprises surprises abound
With the new confidence found
Presenting is great
Get now on your plate
In no time you’ll be so renowned

 

Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Oracle EBS R12.2: Restarting Online Patching Enablement patch

Pythian Group - Thu, 2015-09-03 11:33

If you are in the process of upgrading to Oracle E-Business Suite 12.2.4, you will have gone through this critical phase in the upgrade, which is to apply the Online Patching Enablement patch:

13543062:R12.AD.C.

It's very common to run into errors with this patch on the first try and to have to apply it a couple of times in order to get all issues fixed and online patching enabled. The recommended command to apply this patch is:

adpatch options=hotpatch,forceapply

When the time comes to re-apply the patch to fix problems, if you use the same command, you will notice that the patch completes normally in no time and nothing happens in the back end. This is because of a specific feature of Adpatch: by default it skips jobs that are marked as having run successfully in previous runs or as part of another patch. So we have to force it to re-run those jobs. This can be done by using the command below:

adpatch options=hotpatch,forceapply,nocheckfile

Sometimes we run into cases where the Online Patching Enablement patch completes as "normal" and the online patching feature gets enabled, yet a schema or two have failed to have the EBR feature enabled. As soon as the APPS schema gets EBR enabled by this patch, Adpatch gets disabled and we are forced to use the adop utility from then on, even though other custom schemas failed to get enabled. In this scenario, we can still re-apply the Online Patching Enablement patch using Adpatch after setting the environment variable below:

export ENABLE_ADPATCH=YES

The online patching enablement exercise is a unique experience for every customer. Do post your experiences with this online patching enablement patch in the comments section. I'd love to hear your story!

Discover more about Pythian’s expertise in the world of Oracle.

Categories: DBA Blogs

VMware Debuts SQL Server DBaaS Platform

Pythian Group - Thu, 2015-09-03 11:14


Yesterday at VMworld, VMware announced its entry into the managed database platform market with the introduction of vCloud Air SQL. This new service is an on-demand, managed service offering of Microsoft SQL Server. It’s meant to further the adoption of hybrid operations, since it can be used to extend on-premises SQL Server use into the cloud.

Currently the two major players in this space are Amazon RDS and Azure SQL. Both of those offerings are significantly more mature and feature-rich than VMware’s service as outlined in the early access User Guide.

The beta version of vCloud Air SQL has a number of initial limitations such as:

  • SQL Server licensing is not included or available, meaning that the vCloud Air SQL platform uses a "bring your own license" (BYOL) model. This requires that you have an enterprise agreement with Software Assurance in order to leverage license mobility for existing instances.
  • SQL 2014 is not currently offered; only SQL 2008 & SQL 2012 are supported at this time.
  • SQL Server instances are limited to 150GB.
  • Service tiers are limited to three choices at launch, and altering the service tier of an existing instance is not supported at this time.

Although there are a number of limitations, reviewing the early access beta documentation reveals some interesting details about this service offering:

  • "Instant" snapshot capabilities appear to be superior to any competitor's managed service offerings. These features will be appealing to organizations leveraging DevOps and automated provisioning methodologies.
  • Persistent storage is solid state (SSD) based and will likely be more performant than competing HDD offerings.
  • A new cloud service named vCloud Air SQL DR is planned as a companion product. This service will integrate with an organization’s existing on-premises SQL Server instances. Once integrated, it will provide a variety of cloud based disaster recovery options leveraging Asynchronous replication topologies.

If you want to try this new service, VMware is offering a $300 Credit for first time vCloud Air users HERE.

Discover more about Pythian’s expertise in SQL Server.

 

 

Categories: DBA Blogs

A Collection of Unique and Odd COC Bases

Daniel Fink - Thu, 2015-09-03 10:50
A Collection of Unique and Odd COC Bases, Latest for 2015 - We all sometimes get bored with the defensive base layout we are using, whether it is for farming or for defence. No matter how good your base is (anti-theft, anti-breach, anti-penetration, anti-raid, anti 2-star and 3-star), you will still get bored with it eventually. So we need something new, and maybe that something new is changing the shape of our base. This time OptimalDBA has a collection of the most unique, odd, interesting, funny and creative base images for TH 6, 7, 8, 9 and 10. These bases are quite interesting, and we can use them whenever we are feeling bored.

Keep in mind, though, that base designs like this inevitably have a weakness: yes, they are easy to break into. But if you are feeling bored, they can be used as a temporary base, or while you are preparing for war. Once war has actually started, though, replace them immediately. Without further ado, here is the latest collection of funny, unique and odd bases for TH 7, 8, 9 and 10.
Unique COC Bases

[Image gallery: unique, funny, odd and interesting base designs for TH 7, 8, 9 and 10]
Please go ahead and download them. I hope this article helps those of you who are looking for funny, unique and odd base images.

UKOUG Partner of the Year Awards 2015

Rittman Mead Consulting - Thu, 2015-09-03 08:48


It’s that time of year again for the UKOUG Partner of the Year Awards. This year we have been nominated for 4 awards:

  • Engineered Systems Partner of the Year Award
  • Business Analytics Partner of the Year Award
  • Training Partner of the Year Award
  • Emerging Partner of the Year Award

The awards are decided by “end users of Oracle-related products or services” i.e. you, so we would like to ask you to vote for us by going to this link.

I would like to propose four reasons why I think we deserve these awards.

Research, development and sharing

The culture at Rittman Mead has always been to delve into the tools we use, push their boundaries and share what we learn with the community. Internally, we have formalised this by having our own in house R&D department. People like Robin Moffatt, Jordan Meyer and Mark Rittman spend a lot of time and effort looking at the core Oracle data and analytics toolset to determine the optimal way to use it and see which other leading edge tools can be integrated into it.

This has given rise to a huge amount of freely available information ranging from a whole series on OBIEE performance tuning to drinks cabinet optimisation.

We have also worked with Oracle to produce a new version of their reference architecture that was the first one to incorporate the new wave of big data technologies and approaches such as Hadoop and a data reservoir.

Delivery

One of the main drivers for our R&D department is to make us more effective at delivering data and analytics projects.

We are continually investigating common and new approaches and design patterns found in the world of ETL, data warehousing, business intelligence, analytics, data science, big data and agile project delivery, and combining them with our experience to define optimal ways to deliver projects.

Again, we share a lot of these approaches through talks at Oracle and community conferences, blog posts and routines shared on our GitHub repository.

Learning and education

Learning is key to everything we do in life, and as such, we see the provision of independent courses for Oracle business intelligence and data integration tools as key for the industry. We have developed our own training materials based on the different roles people play on projects; for example, we have a Business Enablement Bootcamp aimed at end users and an OBIEE Bootcamp aimed at developers. We know from our feedback forms how effective this training is.

To supplement the training materials we also wrote the official OBIEE Oracle Press book based around the same examples and data sets.

Optimisation

Our key role as an Oracle partner and member of the Oracle community is to optimise the value any organisation gets from investing in Oracle data and analytics related software and hardware.

This is something that requires a long-term commitment, a high level of investment and a deep level of knowledge and experience, which is hopefully demonstrated above. To this end, we are often prepared to go beyond the level of information that Oracle can offer and, in certain cases, challenge their own understanding of the tools.

We were the first UK partner to buy an Exalytics server, for example, and have written a whole host of articles around the subject. Similarly we are the proud owner of a BICS pod and we are now evaluating how organisations can effectively use cloud in their strategic business intelligence architectures and then, if they do, the best approach to integrating it.

Finally, we are also investing heavily in user engagement, providing the capability to measure then optimise an organisation’s data and analytics systems. We believe user engagement is directly and measurably linked to the return organisations get from their investment in Oracle data and analytics software and hardware.

In Summary

So, in summary, I hope that the reasons that I outline above explain why we deserve some or all of the above awards, as they act as a great way to recognise the effort put in by all our staff over the course of the year. The voting link is here.

Categories: BI & Warehousing

Autoconfig in Oracle EBS R12.2

Pythian Group - Thu, 2015-09-03 08:28

All seasoned Oracle Apps DBAs know that Autoconfig is the master utility that can configure the whole E-Business Suite instance. In E-Business Suite releases 11i, 12.0 and 12.1, running Autoconfig recreated all the relevant configuration files used by the Apache server. If the context file had the correct settings, then the configuration files would include the correct settings after running Autoconfig. This is not the case anymore in Oracle E-Business Suite 12.2. Some of the Apache config files are now under Fusion Middleware control, namely httpd.conf, admin.conf and ssl.conf. All other Apache config files are still under Autoconfig control. But these 3 critical config files include the main config pieces like the web port, SSL port etc.

So if you have to change the port used by an EBS instance, you have to log into the WebLogic admin console, change the port there, and then sync the context XML file using adSyncContext.pl. The adSyncContext.pl utility gets the current port values from the WebLogic configuration and updates the XML file with the new port values. Once the context XML file is in sync, we have to run Autoconfig to sync the other config files and database profile values so they pick up the new web port.
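As a rough sketch of that flow (the exact invocation can vary by AD patch level, so check the My Oracle Support notes mentioned below before running; this assumes the application tier environment is already sourced):

# 1. Change the port in the WebLogic admin console, then pull the new value
#    from the WebLogic configuration into the context file:
perl $AD_TOP/bin/adSyncContext.pl contextfile=$CONTEXT_FILE
# 2. Run Autoconfig to propagate the change to the remaining config files
#    and database profile options:
sh $AD_TOP/bin/adconfig.sh contextfile=$CONTEXT_FILE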

Similarly, if you want to change the JVM arguments or class path, you have to run another utility called adProvisionEBS.pl to make those changes from the command line, or log into the WebLogic admin console to make them there. Interestingly, a few of the changes done in the WebLogic admin console or Fusion Middleware control are automatically synchronized with the context XML file by the adRegisterWLSListeners.pl script that runs in the background all the time. But Apache config file changes are not picked up by this script, so Apache changes have to be synchronized manually.

There are a few My Oracle Support notes that can help you understand these utilities a little more, such as 1676430.1 and 1905593.1. But understand that Autoconfig is a different ball game in Oracle E-Business Suite R12.2.

Discover more about Pythian’s expertise in the world of Oracle.

Categories: DBA Blogs

Collaboration Goes Beyond File Sharing

WebCenter Team - Thu, 2015-09-03 06:31

Yes, Oracle Documents Cloud Service is an enterprise file sync and share (EFSS) solution. Yes, it is in the cloud, as you would expect from a modern-day EFSS solution. And yes, it is easy and intuitive to use, with out-of-the-box mobile accessibility and the added security that you would expect from an Oracle solution. But why do we keep insisting it is more than just an EFSS or file-sharing solution? Because beyond providing cloud storage, 24/7 access, share and sync capabilities, and mobile access, Documents Cloud mobilizes your enterprise content. Unlike first-generation EFSS solutions, it doesn't create new information silos; instead it lets you keep content connected to your applications, your business processes, or your source of record (like an on-premises enterprise content management system) and make it available anywhere, for sharing and for collaboration.

Here's a quick look at how it may make your work life that much easier, more productive, much more efficient and perhaps even fun.

For more information, visit us at oracle.com/digitalcollaboration.

Amazon RDS Migration Tool

Pythian Group - Wed, 2015-09-02 15:06

Amazon has just released their RDS Migration Tool, and Pythian has recently undertaken training in its use for our clients. I wanted to share my initial thoughts on the tool, give some background on its internals, and provide a walk-through of the functionality it will most commonly be used for.

There are many factors to consider when evaluating cloud service providers, including cost, performance, and high availability and disaster recovery options. One of the most critical and overlooked elements of any cloud offering though, is the ease of migration. Often, weeks are spent evaluating all of the options only to discover after the choice is made that it will take hours of expensive downtime to complete the migration, and that there is no good rollback option in the case of failure.

In order to reduce the friction inherent in the move to a DBaaS offering, Amazon has developed an RDS Migration tool. This is an in-depth look at this new tool, which will be available after September 1, 2015. Contact Pythian to start a database migration.

With the introduction of the RDS Migration tool, Amazon has provided a powerful engine capable of handling much more than basic migration tasks. It works natively with Oracle, SQL Server, Sybase, MySQL, PostgreSQL, Redshift (target only), Aurora (target only), and provides an ODBC connector for all other source systems. The engine is powerful enough to handle fairly complex transformations and replication topologies; however, it is a migration tool and isn’t intended for long-term use.

Architecture

Amazon’s RDS Migration Tool architecture is very simple. It consists of your source system, an AWS VM with the Migration Tool installed on it, and the target RDS instance.

Each migration is broken up into Tasks. Within a Task, a source and target database are defined, along with the ability to transform the data, filter the tables or data being moved, and perform complex transformations.

Tasks can be scheduled to run at particular times, can be paused and resumed, and can alert on success or failure. It’s important to note that if a task is paused while a table is loading, that table will be reloaded completely from the beginning when the task resumes.

Within a running task, the following high-level steps are performed:
• Data is pulled from the source using a single thread per table
• Data is converted into a generic data type
• All transformations are applied
• Data is re-converted into the target system’s datatype and inserted
• After the initial load, if specified, the tool monitors for updates to data and applies them in near real-time

While processing the data, each table has a single thread reading from it, and any updates are captured using the source system’s native change data capture utility. Changes are not applied until after the initial load is completed. This is done to avoid overloading the source system, where it’s assumed client applications will still be running.

Performance Considerations

There are several factors which might limit the performance seen when migrating a database.

Network Bandwidth
Probably the biggest contributor to performance issues across data centers, there is no magic button when moving to RDS. If the database is simply too big or too busy for the network to handle the data being sent across, then other options may need to be explored or used in conjunction with this tool.

Some workarounds to consider when network performance is slow include:
• Set up AWS Direct Connect
• Use a bulk-load utility, and then use the tool to catch up on transactions
• Only migrate data from a particular point in time

RDS Migration Tool Server CPU
The migration tool converts all data into a common data type before performing any transformations, then converts them into the target database’s data type. This is obviously very heavy on the server’s CPU, and this is where the main performance bottlenecks on the server are seen.

Capacity of Source database
This tool uses a single SELECT statement to migrate the data, and then returns for any changed data after the initial bulk load is completed. On a busy system, this can be a lot of undo and redo data to migrate, and the source system needs to be watched closely to ensure the log files don’t grow out of control.

Capacity of Target database
In the best case scenario, this will be the limiter as it means all other systems are moving very fast. Amazon does recommend disabling backups for the RDS system while the migration is running to minimize logging.

Walkthrough

The following walkthrough looks at the below capabilities of this tool in version 1.2:

• Bulk Data Migration to and from the client’s environment and Amazon RDS
• Near Real-Time Updates to data after the initial load is completed
• The ability to transform data or add auditing information on the fly
• Filtering capabilities at the table or schema level

You will need to have setup network access to your databases for the RDS Migration Tool.

1. After confirming access with your account manager, access the tool by opening the AWS console, selecting EC2, and choosing AMIs.
AWS Console

2. Select the correct AMI and build your new VM. Amazon recommends an M4.large or M4.xlarge.

3. After building the new VM, you will need to install the connectors for your database engine. In this example, we’ll be using Oracle Instant Client 12.1.0.2 and MySQL ODBC Connector 5.2.7.

  • For the SQL Server client tools, you will need to stop the Migration services before installing.

4. Access the Migration Tool

  • Within VM: http://localhost/AmazonRDSMigrationConsole/
  • Public URL: https:[VM-DNS]/AmazonRDSMigrationConsole/
    • Username/Password is the Administrator login to the VM

5. The first screen after logging in displays all of your current tasks and their statuses.
RDS Migration Tool Home Screen

6. Clicking on the Tasks menu in the upper-left corner will bring up a drop-down menu to access Global Settings. From here, you can set Notifications, Error Handling, Logging, etc…
RDS Migration Tool Global Settings

7. Back on the Tasks menu, click the Manage Databases button to add the source and target databases. As mentioned earlier, this walkthrough will be an Oracle to Aurora migration. Aurora targets are a MySQL database for the purposes of this tool.
RDS Migration Tool Manage Databases Pop-Up

8. After defining your connections, close the Manage Databases pop-up and select New Task. Here, you can define if the task will perform a bulk-load of your data and/or if it will attempt to apply changes made.
RDS Migration Tool New Task

9. After closing the New Task window, simply drag & drop the source and target connectors into the task.

10. By selecting Task Settings, you can now define task level settings such as number of threads, truncate or append data, and define how a restart is handled when the task is paused. You can also override the global error handling and logging settings here.

  • The best practice recommendation is to find the largest LOB value in your source database and set that as the max LOB size in the task. Setting this value allows the task to optimize LOB handling, and will give the best performance.

RDS Migration Tool Task Settings

11. Select the Table Selection button to choose which tables will be migrated. The tool uses wildcard searches to allow any combination of tables to exclude or include. For example, you can:

  • Include all tables in the database
  • Include all tables in a schema or set of schemas
  • Exclude individual tables and bring over all remaining tables
  • Include individual tables and exclude all remaining tables

The tool has an Expand List button which will display all tables that will be migrated.

In this screenshot, all tables in the MUSER08 schema that start with T1 will be migrated, while all tables that start with T2 will be excluded EXCEPT for the T20, T21, T22, & T23 tables.
RDS Migration Tool Table Selection

12. After defining which tables will be migrated, select an individual table and choose the Table Settings button. Here you can add transformations for the individual tables, add new columns or remove existing ones, and filter the data that is brought over.

In this screenshot, the T1 table records will only be brought over if the ID is greater than or equal to 50 and the C1 column is LIKE ‘Migrated%’
RDS Migration Tool Table Settings

13. Select the Global Transformations button. Like the table selection screen, you use wildcards to define which tables these transformations will be applied to.
You can:

  • Rename the schema
  • Rename the table
  • Rename columns
  • Add new columns
  • Drop existing columns
  • Change the column data types

In this screenshot, a new column named MigratedDateTime will be created on all tables and populated with the current DateTime value.
RDS Migration Tool Global Transformations

14. Finally, save the task and choose Run. This will kick off the migration process and bring up the Monitoring window. From here, you can see the current task’s status, notifications, and errors, as well as get an idea of the remaining time.
RDS Migration Tool Monitoring Window

Categories: DBA Blogs

Reducing User Friction

Oracle AppsLab - Wed, 2015-09-02 14:27

A few nights ago a Domino’s Pizza commercial got my attention. It is called “Sarah Loves Emoji.”

At the end, the fictional character Sarah finishes by simply saying "only Domino's gets me."

The idea of texting an emoji, tweeting, using a smart TV, or using a smartwatch to automagically order pizza fascinates me. What Domino's is attempting to do here is reduce user friction, which is defined as anything that prevents a user from accomplishing a goal. After researching Domino's Anywhere user experiences, I found a negative post from a frustrated user, of course! Thus proving that even if the system is designed to reduce friction, the human element in the process is bound to fail at some point. Regardless, I think it's pretty cool that consumer-oriented companies are thinking "outside the box."


As a long-time fan of building Instant Messaging (xmpp/jabber) and SMS (Twilio) bots, I understand how these technologies can actually increase productivity and reduce user friction. Even single-button devices (think Amazon Dash, or my Staples Easy Button hack) can actually serve some useful purpose.

I believe we will start to see more use cases where input is no longer tied to a single web UI or mobile app. Instead we will see how more ubiquitous input channels like text, Twitter, etc. can be used to start or complete a process. After all, it seems like email and text are here to stay for a while, but that's the subject of a different post.

I think we should all strive for our customers to ultimately say that we "get them."

Breaking: Totara LMS Forks From Moodle And Changes Relationship

Michael Feldstein - Wed, 2015-09-02 13:43

By Phil HillMore Posts (358)

What interesting timing. Just as I published my interview with Martin Dougiamas, I was notified that Totara LMS, a Moodle derivative aimed at the corporate learning market, has forked from Moodle and is changing its relationship with the Moodle Community. From their newsletter released today (Sept 3 Australia time):

The relationship between Totara and Moodle is changing

We have made the carefully considered decision that from 2016 Totara LMS will no longer be in lockstep with Moodle. This will free the team at Totara Learning to focus on big leaps forward in usability and modernising the framework for our enterprise customers.

Further down, Richard Wyles wrote an additional post explaining the fork, starting with his long-term relationship with Moodle. He then explains:

Why are we forking?

From 2016 onwards we will no longer be in lockstep. Totara LMS will progressively diverge from its Moodle foundations.

Why have we made this decision? There are several factors;

  1. Innovation. A benefit of open source software is the ability to extend the code base of an application and develop it in a new direction. Over the past few years we have added more than 450,000 lines of code comprising a series of modular, interwoven extensions layered on top of a standard Moodle. All the additional features reflect the different needs of our user community and Totara LMS is now almost unrecognisable from a standard Moodle installation. We’ve taken a lot of care to achieve these results with minimal alterations to Moodle’s core codebase. That policy has been beneficial to both projects. However it also comes with constraints, particularly with some feature requests such as multi-tenancy. To do this well requires deep architectural changes. Overall, to continue, and accelerate our rate of innovation we need to start diverging the base platforms.
  2. Modernising the platform. It is our view, and we know it is a shared view with many Totara Partners, that the current product needs a significant investment in the overall UX. Due to the following point regarding collaboration we are unable to make this investment without diverging from Moodle. We are committed to doing the best by our Totara Partners, key stakeholders in our open source ecosystem, and our growing (collective) customer base. Our 2016 release (which will be tagged as Totara LMS version 9.0) will have a major focus on improving the UX design and overall quality assurance.

Richard goes on with other reasons and concludes:

The decision to forge a new direction is simply based on the need to deliver the best product we’re able – fit for purpose for modern workplace learning, guided by the needs of our partners and customers.

The Totara LMS home page links to a YouTube video introduction, and I note the lack of any reference to the "Moodle" name.

Wow. This is a significant move for several reasons, including the following:

  • The long-term relationship of Richard and others in Totara to the Moodle Community, which will now diverge;
  • The importance of corporate learning for many, if not most, Moodle Partners;
  • One of the reasons not quoted above in Richard’s post is that “The leadership of Moodle Pty Ltd has made it clear to us that it is their intent to clone recent Totara LMS versions to offer the market ‘Moodle for Workplace.’” (read Richard’s post in full); and
  • Totara has contributed a large amount of code to Moodle, including "with Moodle HQ incorporating Totara developed features; Learning Plans and Competencies".

I will now extend my core argument from last week’s post on Blackboard’s Moodle strategy in Latin America.

The Moodle community at large appears to be at an inflection point. This inflection point I see comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup;
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program's value; and
  • Totara LMS forking and diverging from Moodle core.

Analysis post coming soon.

The post Breaking: Totara LMS Forks From Moodle And Changes Relationship appeared first on e-Literate.

Interview With Martin Dougiamas On Changes To Moodle Community This Year

Michael Feldstein - Wed, 2015-09-02 12:59

By Phil HillMore Posts (358)

In my post last week on Blackboard’s Moodle strategy in Latin America, I made the following observation:

At the same time, this strategy and growth comes at a time where the Moodle community at large appears to be at an inflection point. This inflection point I see comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup; and
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program's value.

I’m working on a follow-up post that looks more deeply at these changes to the Moodle community, and as part of the research I’ve interviewed Martin Dougiamas, Moodle Founder and CEO, by email. Given Martin’s role, I wanted to avoid the risk of having his answers get buried within my upcoming analysis post; therefore, I’ve decided to publish the interview in full. The only changes I have made are for clarity: showing and correcting[1] full names instead of acronyms[2], correcting grammar, and reordering questions to show follow-up discussions in context.

Phil: Given Blackboard’s trend in acquisitions for Moodle (Remote-Learner UK, X-Ray Analytics, Nivel Siete), and assuming these are not the last, how do these moves affect the Moodle community and future (including roadmap, Moodle HQ funding, whatever)? What are the biggest benefits and / or what are the risks and downsides?

Martin: In any community there’s always going to be some concern about any one organisation trying to gain dominance. Our certified Moodle Partner program was designed specifically to avoid these kinds of risks by building a large global network of different companies (currently 68 and growing, including Moonami and Elearning Experts recently in the US) who are committed to supporting Moodle HQ. The recent Blackboard acquisitions don’t bring any benefits to Moodle as a whole.

Phil: When you say “the recent Blackboard acquisitions don’t bring any benefits to Moodle as a whole”, I note that in Latin America the only other Moodle Partners are in Argentina (1) and Brazil (3). Would Blackboard / Nivel Siete expansion to service most of Latin America end up generating more official Moodle Partner revenue, thus helping fund more core development through HQ?

Martin: We have South American Moodle Partners in Argentina, Bolivia, Chile, Peru and several in Brazil, as well as Partners who work in South America from other locations. Our Partner program is all about supporting local businesses who are Moodle experts, and they support us by paying royalties.

There is always some talk around acquisitions, which it’s good to be mindful of. From a Moodle point of view there’s no new “expansion” – it was already happening.

Nivel Siete, like Moodlerooms, was a tiny company of several people who grew to 20 or so people with our support over many years. Meanwhile, Blackboard has had offices and resellers selling Blackboard Learn in South America for many years. As you know, acquisitions usually happen to remove a competitor or to gain some capabilities that the buying company was not able to develop on their own.

Phil: Do you agree with my characterization that “Moodle community at large appears to be at an inflection point” this year, driven by the three examples listed?

Martin: Sorry, I don’t really agree with your characterization. Unlike nearly all other LMS companies, Moodle is not profit-focussed (all our revenue goes into salaries). We are an organisation that is completely focussed on supplying a true open source alternative for the world without resorting to venture capital and the profit-driven thinking that comes with that.

Of course we still want to grow our core development team significantly in order to help Moodle evolve faster. So some of the big new things you’re seeing from us this year have been in the pipeline for a while and are about driving that: the Moodle Association is a formalisation of crowd-funding for additional new core developments; and MoodleCloud is very much about supporting and strengthening the Moodle Partner brand (while helping those who want these new services).

Regarding our ex-Partner Remote-Learner, it’s a shame we’ve lost them as friends, but they are driven by their own internal issues. Saying they have switched to the Association is a little like saying you switched to Kickstarter; it doesn’t mean much. In any case they cannot actually even join the Moodle Association, as commercial LMS service providers are not eligible.

Phil: My note on “inflection point” is not based on a profit-driven assumption. The idea is that significant changes are underway that could change the future direction of Moodle. A lot depends on Blackboard’s acquisition strategy (assuming it goes beyond Remote-Learner UK and Nivel Siete), whether other Moodle Partners follow Remote-Learner’s decision, and whether the Moodle Association shows signs of producing similar or larger revenues than the Moodle Partner program. What I don’t see happening is an extension of the status quo.

Martin: Moodle’s mission is not changing at all; we are just expanding and improving how we do things in response to a shifting edtech world. We are starting the Moodle Association to fill a gap that our users have often expressed to us – they wanted a way to have some more direct input over major changes in core Moodle. There is no overlap between this and the Moodle Partners – in fact we are also doing a great deal to improve and grow the Moodle Partner program, as well as the user experience for those who need Moodle services from them.

Phil: You have previously described the Moodle model as a ‘benevolent dictatorship’. Do you see that core model changing in the near future based on the three items I mentioned under inflection point (Moodle Association, Blackboard acquisitions, Remote-Learner leaving Moodle Partner program) or do you see roughly the same model but just with additional crowd-funding through Moodle Association? I think you’re answering the latter, but I want to make sure.

Martin: Yes, the latter.

I don’t use the ‘benevolent dictatorship’ term myself although it’s common in the open source world. Yes, I wrote everything in the first versions of Moodle, and my company continues to lead the project via Moodle Pty Ltd [aka Moodle HQ].

However, rather than any kind of dictatorship, we see our mission as being *servants* to the community of teachers and learners who need Moodle and quality open source Free software. Our core duty is to give away the software we develop. Our values are to support educators with respect, integrity, openness and innovation. See https://moodle.com/hq/. This is never going to change.

This is in contrast to multi-billion-dollar companies whose value is in increasing their EBITDA [earnings before interest, taxes, depreciation and amortization] before a sale, and whose mission is to expand by acquiring markets in other countries.

Phil: Could you comment on the deep penetration of Moodle worldwide into corporate learning (maybe equal to higher ed / K-12)?

Martin: Yes, Moodle is used a lot in corporate learning worldwide. In fact something like 40% of the many thousands of clients using Moodle Partners as service providers are using Moodle for company training, including some really huge ones. We have a few case studies on our website at moodle.com/stories if you’re interested.

  1. Changing references to “Remote Learner” to follow the proper “Remote-Learner” usage
  2. For example, replacing “BB” with “Blackboard”, “NS” with “Nivel Siete”, etc.

The post Interview With Martin Dougiamas On Changes To Moodle Community This Year appeared first on e-Literate.

Bigger Than Ever―Oracle’s Commerce Solutions at OpenWorld 2015

Linda Fishman Hoyle - Wed, 2015-09-02 12:37

A Guest Post by Jeri Kelley, Senior Principal Product Manager, Oracle

There are a lot of great reasons for Oracle Commerce customers to attend OpenWorld at the end of October, including in-depth product updates, many customer success stories, hands-on labs, and networking events. Attendees will walk away with a better understanding of how Oracle’s commerce solutions can help them stay competitive in today’s rapidly changing commerce market.

What’s New and Different?

  • Meet Oracle Commerce Cloud―it's the newest addition to Oracle’s CX Applications portfolio. See demos, learn about the roadmap, and hear directly from our first customers leveraging this new product
  • Check out the Hands-on Labs: See how you can quickly stand up an online storefront with Oracle Commerce Cloud
  • Catch the Interactive Customer Showcases in the CX Commerce Demo Zone, featuring Oracle Commerce and Commerce Cloud customers

All sessions and the demo zone for customer experience will be located on the 2nd floor of Moscone West in San Francisco.

Conference Sessions

Commerce attendees can explore best practices and share knowledge with more than 20 commerce-focused sessions:

  • Learn about roadmap and release updates
  • Get an in-depth look at Oracle Commerce Cloud
  • Attend thought-leadership sessions featuring Oracle strategy experts and industry analysts
  • Sit in on customer panels featuring both Oracle Commerce and Commerce Cloud customers
  • Attend Experience Manager and Business Control Center best-practice sessions
  • Listen to customer and partner case studies
  • Take part in more than just commerce-focused sessions and explore all that CX Central @ OpenWorld has to offer

Sessions of Special Interest

  • The Future of Oracle Commerce: Roadmap and Release Update (CON6303), Tuesday, Oct. 27, 5:15-6:00 p.m., Moscone West, Room 2005
  • Meet Oracle Commerce Cloud―A New SaaS Solution for Commerce (CON8647), Wednesday, Oct. 28, 12:15-1:00 p.m., Moscone West, Room 2005
  • Accelerating Success with Oracle Commerce―Panel discussion with KLX Aerospace, Tilly’s, and other Oracle Commerce customers (CON8641), Tuesday, Oct. 27, 4:00-4:45 p.m., Moscone West, Room 2005
  • Building Commerce Experiences In The Cloud―Panel discussion with Rock/Creek, Hollander, and Elaine Turner (CON8842), Wednesday, Oct. 28, 3:00-3:45 p.m., Moscone West, Room 2005

Guest Customer and Partner Appearances Include:

Vitamix, American Greetings, Maritz Reward Solutions, KLX Aerospace, Tilly’s, Ulta, Rock/Creek, Hollander, Elaine Turner, JC Penney, Furniture Row, TOMS, Bodybuilding.com, Lojas Renner, Verizon, Razorfish, Compasso, SapientNitro, Cirrus10, and more!

Commerce Demo Zone

Take a break in the CX Commerce Demo Zone. You’ll see the latest Oracle Commerce product demonstrations led by members of the Oracle Commerce product management and sales consulting teams. Take note of the latest features and learn from our customers at these demonstrations:

  • Oracle Commerce On-Premise: See the latest features for both B2C and B2B commerce
  • Oracle Commerce Cloud: Learn all about our newest offering
  • Interactive Customer Showcase: Stop by and visit Oracle Commerce and Commerce Cloud customers as they showcase their latest product offerings. You also can see how they are using Oracle Commerce or Commerce Cloud to power their online shopping experiences.
    • Note: These customers will be offering special OpenWorld-only discounts on their products, so make sure to stop by! Featured customers include Vitamix, Rock/Creek, Elaine Turner, and Hollander.

Customer Events

Finally, a preview of Oracle Commerce at OpenWorld would not be complete without a mention of customer appreciation events:

  • Monday, October 26: Commerce Customer Dinner @ The Waterbar Restaurant; by invitation only and your chance to network with Oracle Commerce product management and your commerce peers.
  • Tuesday, October 27: CX customer appreciation event; planning is in progress!
  • Wednesday, October 28: Oracle Appreciation Event at Treasure Island!

At a Glance

Visit Commerce—CX Central @ OpenWorld for full details on speakers, conference sessions, exhibits and entertainment!

We look forward to seeing everyone in San Francisco, October 25–October 29, 2015!

Using Data Relationship Management to Maintain Hierarchies for BI Apps (1)

Dylan's BI Notes - Wed, 2015-09-02 10:52
DRM is a generic data management application. It provides a web-based application that allows the deploying company to maintain its data. It is a collaboration tool that allows you to define validations, set up data security duties, and share the maintenance work. The tool was originally designed to maintain account information. However, […]
Categories: BI & Warehousing