
Feed aggregator

The investment is all about the PRODUCT, the PRODUCT, the PRODUCT.

Chris Warticki - Tue, 2015-04-21 13:46

Everyone wants to know where the 22% of the value of the Support investment goes. I, on the other hand, would like to know where the value of the 78% of the Product investment went. It's quite easy to share with you the entire inventory of assets, resources, tools and best practices when it comes to the value of the Support investment.

Let's talk Product for a moment. Take the Database, for example. Review the 40+ features in the Enterprise Edition. How many are you using? Most customers are using LESS THAN 10% of those features. It's like having a very expensive Excel spreadsheet one would like to call a database, and running very costly queries from it. Are you capitalizing on the 9 Application Development features? Have you been running the same old SQL statements without taking advantage of the years' worth of SQL improvements built into the product? Let's throw the Security card. Nobody wants to wind up on the front page, nor on the 6 o'clock news, because they just had a data breach. But are you utilizing the 6 Security Defense features?

Where am I headed with this?  Training.  That's where.
In the last decade, across the global economy, the first budget to be cut was training. In my informal polls of the customers I work with, most haven't taken a 3-5 day instructor-led class on the current version of the product in over 5-7 years! I've met DBAs managing 12c the same way they installed and managed v6-7 of the RDBMS.

There is help. There are resources.

Back to the investment made in Oracle. And the re-investment: two-thirds of the Support investment goes back into R&D. Billions of dollars every year. Leverage the features and functions found within your products. Once you do, you gain efficiencies. When efficiencies are gained, profitability is realized. Oracle IS the example for using our own products, features and functions. Oracle first paid a dividend to shareholders around the company's 30th anniversary and continues to do so, in what some would term one of the most turbulent global economies ever.

The investment is in the Product, the Product, the Product.

Chris Warticki is a member of the Global Customer Management team for Oracle Support Services
Tweeting @cwarticki

“Digital Disruption: It’s Not What You Think” by Oracle’s Rob Preston

Linda Fishman Hoyle - Tue, 2015-04-21 12:21

A Guest Post by Oracle’s Rob Preston, Editorial Director, Content Central (pictured left)

TechTarget’s Tom Goodwin uses Uber, Facebook, Alibaba, and Airbnb as examples of players with new tech-savvy business models that are disrupting their industries. Goodwin believes that our economic future lies in controlling the software interfaces that connect goods and services with the customer. In his mind, it’s all about the digital customer interface, and things such as costly brick and mortar, physical goods, and intangible services are old school and irrelevant.

But Oracle’s Rob Preston contends (in a recent Forbes OracleVoice article) that digital disruption runs much deeper than the customer interface. “It’s also about modernizing manufacturing and financial processes, overhauling entire supply chains, bringing more intelligence to marketing and sales strategies, making it easier for people and teams to collaborate, and rethinking talent recruitment and management,” he says.

Preston states that this disruption caused by cloud, mobile, data analytics, and social might not be as exciting as that caused by the likes of Uber and Airbnb, but it’s just as profound. Preston does a nice job of weaving in examples, industries, and insights into the article to make his case.

Productivity—A Priority for Every Business Leader

Linda Fishman Hoyle - Tue, 2015-04-21 12:12

A Guest Post by Caesar Peter (pictured left), Senior Director, CX Sales Applications Group

Oracle CloudWorld was held in the first week of April in India. Interestingly, the summit coincided with the new financial year in India, and hence performance strategy was top of mind for almost every business leader with whom I met.

Generally, the most popular discussions during these events are about setting a higher sales budget, IT applications, project status, the economy, etc. In contrast to this popular approach, it was wonderful to note that most of the business leaders spoke about improving their employees' productivity, spearheading the change starting with their sales folks.

In fact, the head of sales of a leading consumer goods organization even took the time to describe what he meant by productivity:

  1. Complete more activities in the limited (stipulated) time
  2. Ensure increased outcomes from those activities

After hearing this, a senior person in the banking industry not only agreed, but also shared that their bank is in the process of evaluating a new set of modern, customer-facing tools that could facilitate their employees’ productivity. He shared that their criteria of evaluation are 1) availability (100 percent mobile, access from anywhere, everywhere), 2) agility (simple and easy to use), and 3) intelligent insights (alerts, reminders, notifications, and infolets).

As we all know, improving sales productivity can bring incredible value to the organization. For example, a 15 percent increase in employee productivity could add a 10 to 15 percent improvement to the bottom line of overall business performance. It is indeed wonderful to watch these leaders and organizations move from knowing how important sales productivity is to knowing how to make these improvements happen.

Find an Oracle CloudWorld event near you.

Conformed Dimension and Data Mining

Dylan's BI Notes - Mon, 2015-04-20 20:48
I believe that Conformed Dimensions play a key role in data mining.  Here is why: A conformed dimension can bring data together from different subject areas and, sometimes, from different source systems. The relevant data can thus be brought together.  Data Mining is a technique for finding patterns in historical data. […]
Categories: BI & Warehousing

Enable Real Application Security (RAS) in APEX 5.0

Dimitri Gielis - Mon, 2015-04-20 15:25
Oracle Database 12c introduced Oracle Real Application Security (RAS), the next generation of Oracle Virtual Private Database (VPD). In APEX 5.0, RAS is declaratively built in. Follow the steps below to enable it:

Login to the INTERNAL workspace and go to Manage Instance > Security:

In the Real Application Security section set Allow Real Application Security to Yes.

Next, login to the Workspace your Application is built in and go to your Authentication Scheme.
You'll see a new section in there called Real Application Security.



The dropdown has the following options:

  • Disabled: Real Application Security does not get used in the application. 
  • Internal Users: APEX creates a RAS session and assumes that all users are internal and passes false via the is_external parameter to dbms_xs_sessions.assign_user. 
  • External Users: RAS session created and true gets passed via the is_external parameter to dbms_xs_sessions.assign_user. 

The last two options enable RAS Mode and make the Dynamic Roles and Namespaces shuttle available (from the help in APEX). Make sure the users get privileges to access the application's schema objects. For External Users, you can, for example, grant database privileges to a RAS Dynamic Application Role and configure it in this authentication scheme as a Dynamic Role. You can also enable roles via a call to apex_authorization.enable_dynamic_groups, e.g. in a Post-Authentication procedure, as sketched below.
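For illustration, a minimal Post-Authentication procedure enabling one dynamic group might look like this (RAS_SALES_ROLE is a made-up group name for this sketch; apex_authorization.enable_dynamic_groups takes an apex_t_varchar2 list of group names):

procedure post_authentication
is
begin
   -- enable a RAS dynamic role for the current APEX session
   -- (RAS_SALES_ROLE is a hypothetical group name)
   apex_authorization.enable_dynamic_groups (
       p_group_names => apex_t_varchar2('RAS_SALES_ROLE') );
end post_authentication;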

You can read more about Oracle Real Application Security and view an entire example of how to set up RAS on the database side. I'm still learning about all the RAS features myself, but I thought I'd share the above already. I plan to include a chapter about RAS and APEX 5.0, with a real-case example, in my e-book, as I see a big benefit to using it in highly regulated, secure and audited environments.


Categories: Development

Keeping Cassandra Alive

Pythian Group - Mon, 2015-04-20 12:28

Troubleshooting Cassandra under pressure

This is the second blog post in the series, and it is a bit more technical than the first one. I will explain some things that can be done to keep a cluster/server running when you are having problems in that cluster.

There were a lot of changes in Cassandra over the last 4 years (from 0.8 to 2.1), so I will refrain from discussing troubleshooting problems that affect some specific versions. Also, this is the kind of troubleshooting you need when you can’t “add a node”.

Why can’t I just add a node? Well, if you aren’t on vnodes, and you didn’t pre-calculate the token ranges, adding a node is a big effort. Other constraints may also apply, like budget or delivery time for hardware (if you are on bare metal). Plus rack capacity, power constraints, etc…

Now you may say:

“Ok, we can’t add a node! What should we do?! We have a storm coming!”

So, I have navigated through such a storm and it’s not an easy task, but it’s doable! First thing: you have to know what you have, that is critical! You also need to know where you can take more damage.

Let’s assume you have the following situation, and what I recommend for it:

  • Heavy Write Cluster, Low Read

Now let’s define “storm”: a storm is not when Cassandra fails; it’s an unanticipated load increase or a disaster. What happens is that you have more load than your planned capacity (either because of failure of nodes or because of a sudden traffic increase). This will increase your resource usage to a point where your machines will start to die.

Let’s understand what can cause a Cassandra process to die, and probably the machine with it (if you OOM and you didn’t configure swap… I warned you!), for the scenario described above.

  1. More data to the commitlog = more I/O pressure (Discard if you have commitlog on a different HDD)
  2. Data is written to memtables = Memory is used
  3. Memtables reach thresholds faster, get flushed to disk = I/O pressure
  4. Compaction starts faster and frequently = I/O pressure, CPU pressure
  5. Too much I/O: compaction can’t compact fast enough and the memtables aren’t flushing fast enough = Memory not being released.
  6. Too much memory usage, JVM triggers GC more frequently = CPU pressure
  7. JVM can’t release memory = OOM
  8. OOM = PUM! Node dies (if you are “lucky”, the kernel will kill Cassandra)

And I didn’t even go through the hints that would be stored as nodes became unresponsive and sent out once they get back online.

So now we know where our pain points are. Let’s understand them and see what we can do about it:

  • Commitlog – Let’s just assume you have this on separate HDD, and don’t do anything about it (after all it’s your safeguard).
  • Memtables – We can control how often they are flushed. This is a possible tweak point, although it requires a Cassandra restart for the changes to take effect.
  • Compaction – This we can control via nodetool; we can even disable it in later versions.
  • JVM GC – We can change settings, but they are difficult to tweak and a restart is needed.
  • Swap – We can play a bit here if we do have a swap partition.
  • Dirty_ratio – How often the data is actually written to the HDD. This can put your data at risk, but it can also help.
  • Replication Factor – This can be changed on the fly and will help by putting less pressure on the nodes.

So, what to do? Where to start? It depends on the scenario, case by case. I would probably let my read performance suffer to keep the writes coming in. To allow that, the easiest way should be making the reads CL = ONE. That sometimes does look like the fast and easy option. But if your writes are not using Quorum and/or you have read_repair… you will spread more writes (and RF>1). I would pick compaction as my first target; you can always try to get it back up to pace later (re-enable it, increase compaction throughput – see the nodetool sketch below). Another option would be to increase dirty_ratio and risk losing data (trusting the commitlogs + RF>1 helps avoid losing data), but this will give your HDD more room until the cluster recovers.
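For reference, the compaction knobs mentioned above are all reachable through nodetool; a rough sequence might look like this (the throughput value is illustrative, tune it for your hardware):

# stop automatic compactions from competing for I/O during the storm
nodetool disableautocompaction

# check what compaction is currently doing
nodetool compactionstats

# cap or raise compaction I/O in MB/s (0 disables throttling)
nodetool setcompactionthroughput 16

# once the storm passes, let compaction catch up again
nodetool enableautocompaction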

But every case is its own case. I will talk about my own case, problems and resolutions this Wednesday at the Datastax Day in London! Feel free to join me!

 

Categories: DBA Blogs

Pillars of PowerShell: Debugging

Pythian Group - Mon, 2015-04-20 12:09
Introduction

This is the third blog post continuing the series on the Pillars of PowerShell. The first two posts are:

  1. Interacting
  2. Commanding

We are going to take a bit of a jump and dig into a particular topic that I think is better to cover up front instead of later. In this post I want to go over a few ways you can debug scripts, or just issues, in PowerShell. This is a topic that can get very advanced and make for a very long blog post. Instead of trying to put all of that in one blog post, I have a few links that I am going to share at the end of this post that will point you to some of the more deep-dive material on debugging.

Pillar 3: Debugging

When it comes to writing scripts or developing T-SQL procedures, you will generally see folks use print statements to either check where the processing is at in the script, or to output some “additional” information. PowerShell is no different, and it offers cmdlets that let you write output to various destinations and even use it to make decisions. One of the main ones I like to use when I write scripts is Write-Verbose. You may see some folks use Write-Host in their scripts, and all I can say to that is, “be kind to puppies”. The basic gist is that Write-Host outputs plain text, and will always output text unless you comment it out or remove it from your script. Using Write-Verbose, you can have that information output only when a parameter switch is used, rightly called “-Verbose”. This switch is included in most built-in cmdlets for modules provided by Microsoft. If you want to include it in your script or function you simply need to include this at the top:

[CmdletBinding()]
Param()

So in the example below you can see that neither function outputs the Write-Verbose message when called normally:
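The original post shows this with screenshots; a reconstruction of the two functions might look like this (the message text is made up):

function Test-NoVerbose {
    # plain function: the -Verbose common parameter is not wired up
    Write-Verbose "Some verbose status message."
}

function Test-WithVerbose {
    [CmdletBinding()]
    Param()
    # advanced function: -Verbose now controls Write-Verbose output
    Write-Verbose "Some verbose status message."
}

Test-NoVerbose     # no output
Test-WithVerbose   # no output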

[Screenshot: p3_function_verbose_example]

The difference appears when you include the verbose switch: Test-NoVerbose still does nothing, whereas Test-WithVerbose will produce output:
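Continuing the sketch above:

Test-NoVerbose -Verbose     # still nothing: the switch has no effect
Test-WithVerbose -Verbose   # VERBOSE: Some verbose status message.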

[Screenshot: p3_function_verbose_example_2]

So in cases where other people may be using your scripts, this feature will help keep output clean unless you need it for debugging. I tend to use this most often when I am working on long scripts where I initially want to know what is going on as they run. If I ever have to come back to a script for debugging, I can just use the switch, versus the normal execution which doesn’t need all that output.

Errors

They are going to happen; it is inevitable that at some point in your scripting journey you are going to hit an error. You cannot prepare for every error, but you can help by collecting as much information about an error as possible to aid debugging. Just like you would handle errors in T-SQL using a TRY/CATCH block, you can do the same in PowerShell.

PowerShell offers a variable, available in every session you open or run, called $Error. [The dollar sign in PowerShell denotes a variable.] This variable holds records of the errors that have occurred in your session, including those that occur in your scripts. There are other errors or exceptions that can also be thrown by .NET objects and that work a bit differently in how you capture them; I will refer you to Allen White’s post on Handling Errors in PowerShell to see a good example.
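As a small illustration, here is a minimal try/catch and $Error sketch (the division by zero is just a convenient way to force a terminating error):

try {
    1/0   # forces a terminating error
}
catch {
    # inside catch, $_ is the ErrorRecord that was just thrown
    "Caught: $($_.Exception.Message)"
}

# $Error[0] always holds the most recent error of the session
"Last error: $($Error[0].Exception.Message)"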

Summary

Debugging is one of those topics that can fill a 3-day course, so one blog post is obviously not going to cover all the information you might need. I came across a good blog post by the PowerShell Team on Advanced Debugging in PowerShell that should be required reading for anyone wanting to get involved with PowerShell scripting.

Categories: DBA Blogs

How Cloud Computing is Revolutionizing Business

Oracle Infogram - Mon, 2015-04-20 11:06
Contributions by Angela Golla, Infogram Deputy Editor


How Cloud Computing is Revolutionizing Business
Cloud computing is triggering a stunning shift in how businesses operate. Modern SaaS applications for marketing, HR, and ERP are allowing companies to accelerate operations and engage more intimately with their customers thanks to heretofore unseen heroes in their ranks.  Read Mark Hurd's latest blog on "How Cloud Computing is Revolutionizing Business".

Manuals

Jonathan Lewis - Mon, 2015-04-20 09:24

From time to time I read a question (or, worse, an answer) on OTN and wonder how someone could have managed to misunderstand some fundamental feature of Oracle – and then, as I keep telling people everyone should do, I re-read the manuals and realise that sometimes the manuals make it really easy to come to the wrong conclusion.

Having nothing exciting to do on the plane to Bucharest today, I decided it was time to read the Concepts manual again – 12c version – to remind myself of how much I’ve forgotten. Since I was reading the mobi version on an iPad mini I can’t quote page numbers, but at “location 9913 of 16157” I found the following text in a sidebar:

“LGWR can write redo log entries to disk before a transaction commits. The redo entries become permanent only if the transaction later commits.”

Now I know what that’s trying to say because I already know how Oracle works – but it explains the various questions I’ve seen on OTN (and elsewhere) from people struggling with the idea of how Oracle manages to “not have” redo for transactions that didn’t commit.

The redo entries become permanent the moment they are written to disc – nothing makes any of the content of the redo log files disappear [1], nothing goes back and flags some bits of the redo log as “not really there”. It’s the changes to the data blocks described by the redo that become permanent only if the transaction later commits. If the transaction rolls back [2], the session doesn’t “seek and destroy” the previous redo; it generates MORE redo (based on the descriptions that it originally put into the undo segment) and applies the changes described by that redo to reverse out the effects of the previous changes.

So next time you see a really bizarre question about how Oracle works remember that it could have arisen from someone reading the manual carefully; because sometimes the manual writers know exactly what they mean to say but don’t actually say it clearly and unambiguously.

[1] I am aware that strange and rare events such as disc crashes could make all sorts of things disappear, but I think it’s reasonable to assume here that we’re talking about standard processing mechanisms.

[2] I am also aware that there are variations dependent on events like sessions being killed, or instance failure, that could need some further explanation, but there’s a time, place, and pace for everything.


APEX 5.0 Summer School 2015

Denes Kubicek - Mon, 2015-04-20 09:19
Quite a lot is happening in the APEX community this year. APEX 5.0 has been released. APEX Connect takes place in June and is set to be the biggest APEX gathering in the German-speaking region so far. Now a series of webinars has been organized as well - the APEX 5.0 Summer School - for everyone who isn't going on holiday, or for those who find holidays boring. Sign up; the number of free places is not unlimited.

Categories: Development

I’m Mark Heppell and this is how I work

Duncan Davies - Mon, 2015-04-20 09:00

Next up in the 2015 ‘How I work‘ series is Mark Heppell. Mark is one of Cedar’s longest-serving employees, having been with us for 16 years. As a consequence he’s been on almost all our client sites, so he’s one of our better-known and most beloved consultants.

Mark is one of Cedar’s key PeopleSoft developers and one of the first people we all turn to when there’s something beyond our abilities. He’s currently waist deep in some great-looking Fluid work (I’ve had a sneak peek).

Profile

Name: Mark Heppell

Occupation: Lead Technical Consultant
Location: Home office, Marlow, UK
Current computer: Dell Latitude E5520 laptop and Acer Aspire X3400 desktop
Current mobile devices: iPhone 4, iPad Air

What apps/software/tools can’t you live without?
I love any home automation gadgets.  I’ve got solar panels automatically turning on immersion heaters and everything in the lounge is controlled from my iPad.  It’s kind of an insurance policy, because without me, my wife can’t turn on the TV and would be sitting in darkness.

Besides your phone and computer, what gadget can’t you live without?
I’m not sure if it’s actually a gadget but if the broadband ever went down I think my teenage son’s world would end.

What’s your workspace like?
It’s good.  With Remote Desktop and Skype there’s very little need for me to leave my house.  If I get lonely I talk to the bonsai.

[Photo: WorkPlace]

What do you listen to while you work?
At the minute I’m listening to Imagine Dragons, Hozier and AWOLNATION, or I just put on the radio.

What PeopleSoft-related productivity apps do you use?
The usual PeopleSoft stuff, App Designer and SQL Developer.  Paint.net if I have to do any graphics, Notepad++ for html and trace files, FileZilla for Unix and ExamDiff for file comparisons.  With the advent of Fluid I seem to be spending more and more time looking at browser developer tools, generally Firebug for day to day stuff and Chrome for mobile device emulation.

Do you have a 2-line tip that some others might not know?
When creating javascript use something along the lines of
javascript:%SubmitScriptName(%FormName,..
or your javascript won’t work when the user clicks the “New Window” link.

What SQL/Code do you find yourself writing most often?
Working with HR,
Select * from PS_JOB J where J.EFFDT=(Select Max(J_ED.EFFDT) from ......,
will generally make an appearance a couple of times a week.
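For illustration, the elided correlation typically follows the classic PeopleSoft effective-date pattern, something like this (a sketch assuming the usual EMPLID/EMPL_RCD keys):

SELECT *
  FROM PS_JOB J
 WHERE J.EFFDT = (SELECT MAX(J_ED.EFFDT)
                    FROM PS_JOB J_ED
                   WHERE J_ED.EMPLID = J.EMPLID
                     AND J_ED.EMPL_RCD = J.EMPL_RCD
                     AND J_ED.EFFDT <= SYSDATE)
   AND J.EFFSEQ = (SELECT MAX(J_ES.EFFSEQ)
                     FROM PS_JOB J_ES
                    WHERE J_ES.EMPLID = J.EMPLID
                      AND J_ES.EMPL_RCD = J.EMPL_RCD
                      AND J_ES.EFFDT = J.EFFDT)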

What everyday thing are you better at than anyone else?
Better at than anyone else is a bit of a stretch, but I do seem to end up with the more complicated/abstract client requests.

What’s the best advice you’ve ever received?
I’m sure my Dad gives me some wonderful advice, but he’s a Geordie and I can only understand 1 in 3 words he says.


Want to Outperform Your Competitors? 4 Ways to Serve Up a Cloud Advantage

Linda Fishman Hoyle - Mon, 2015-04-20 08:52

A Guest Post by Rod Johnson (pictured left), Group Vice President, Applications Business Group, Oracle

With more and more companies moving toward the cloud, we wanted to find out how cloud adoption impacted the success of these businesses. To do this, Oracle sponsored a global study, “Cloud Computing Comes of Age,” conducted by Harvard Business Review Analytic Services.

What did we find out?

  1. IT is not your opponent. IT and LOB share equal responsibility for cloud in high performing organizations.
  2. Move further, faster. Cloud leaders are not only more likely to use cloud across the top five functions (recruiting, marketing, sales, training, travel/expense management), but are also much more likely to be pushing cloud into more core business functions including procurement, supply chain and accounting.
  3. Be a role model. More than twice as many cloud leader respondents said that their CIOs had taken a leadership role in the move to the cloud compared to cloud novices (62 percent to 31 percent). These CIOs value the agility and efficiency cloud provides and they’ve made it a part of every conversation.
  4. Play to win. 72 percent of cloud leaders launched new products, 62 percent entered new markets, 55 percent expanded geographically, and 39 percent launched new businesses over the past three years.

The survey shows the clear business benefits of adopting cloud computing, but also highlights important insights for organizations that are looking to capitalize on the opportunities presented by the cloud.  IT and business leaders must work together to promote a more holistic cloud strategy if their organizations are to benefit from the next wave of cloud computing.

 Find out more about the study here.

APEX Summer School 2015 - Online Webinars

Dietmar Aust - Mon, 2015-04-20 08:38
As part of the APEX Summer School 2015 (Twitter handle: #apexsummer2015), we will be running 9 free webinars on APEX 5 with experts from the German-speaking region:



Many thanks to Carsten Czarski for the invitation and the organization! And not least, many thanks for the webinar's great homepage - all of it 100% APEX 5 out-of-the-box! A simple yet effective example of what is possible with APEX 5 by default:

APEX 5 has been production-ready since last week and can be downloaded here.

There are more APEX events coming up - take a look.

It's going to be a hot APEX summer ;).



Enjoy,
~Dietmar.

Destroying The Moon

Scott Spendolini - Mon, 2015-04-20 08:18
Just under three years ago, I joined Enkitec when they acquired Sumneva.  The next three years brought a whirlwind of change and excitement - new products, additional training, and expanding the APEX practice from an almost nonexistent state to one of the best in the world.

Like all good things, that run has come to an end.  Last Friday was my final day at Accenture, and I am once again back in the arena of being self-employed.  Without any doubt, I am leaving behind some of the best minds in the Oracle community.  However, I am not leaving behind the new friendships that I have forged over the past three years.  Those will come with me and hopefully remain with me for many, many years to come.

Making the jump for the second time is not nearly as scary as it was the first time, but it's still an emotional move.  Specifically, what's next for me?  That's a good question, as the answer is not 100% clear yet.  There are a lot of possibilities, and hopefully things will be a lot more defined at the end of the week.

#letswreckthistogether

PeopleSoft's paths to the Cloud - Part III

Javier Delgado - Mon, 2015-04-20 06:49
In my previous posts in this series, I covered how cloud computing can be used to reduce costs and maximize the flexibility of PeopleSoft Development and Production environments. In both cases, I focused on one specific area of cloud computing: Infrastructure as a Service (IaaS).

Today I will explain what kind of benefits can be expected by using another important area: Database as a Service (DBaaS). Instead of using an IaaS provisioned server to install and maintain your database, DBaaS providers take responsibility for installing and maintaining the database.

There are many players in this market, including Amazon, Microsoft and Oracle. The service features may differ, but in a nutshell, they normally offer these capabilities:

  • Backups: database backups are automated, and you can decide to restore point-in-time backups at any moment. You can also decide when to take a snapshot of your database, which may eventually be used to create another database instance (for example, to copy your Production database into the User Acceptance environment).
  • High Availability: while some IaaS providers do not support high-availability database solutions such as Oracle RAC (for instance, it is not supported on Amazon EC2), many DBaaS providers include high availability by default.
  • Contingency: some providers maintain a standby copy of your database in another data center. This allows you to quickly restore your system in case the original data center's services are lost.
  • Patching: although you can decide when to apply a database patch, the DBaaS provider will apply it for you. In many cases, you can turn on automatic patching to make sure your database engine is always up to date.
  • Monitoring: providers give the system administrators access to a management console, in which they can monitor the database behavior and add or remove resources as needed.
  • Notifications: in order to simplify the monitoring effort, you normally have the possibility of setting up notifications to be received by email and/or SMS upon a list of events, which may include CPU usage, storage availability, etc.

From my point of view, these services offer significant advantages for PeopleSoft customers, particularly if your current architecture does not support all of the previously mentioned services or you do not have the right DBA skills in-house. Even if your organization does not fall into these categories, the scalability and elasticity of DBaaS providers are very difficult to match by most internal IT organizations.

In any case, if you are interested in using Database as a Service for your PeopleSoft installation, make sure you correctly evaluate what each provider can give you.



Debugging PeopleSoft Absence Management Forecast

Javier Delgado - Mon, 2015-04-20 06:47
Forecasting is one of the most useful PeopleSoft Absence Management functionalities. It allows users to know what the resulting balance will be when entering an absence. The alternative is to wait until the Global Payroll calendar group is calculated, which naturally is far from being an online calculation.

Although this is a handy functionality, the calculation process does not always return the expected results. For some specific needs, the system elements FCST ASOF DT, FCST BGN DT and FCST END DT may be needed. These elements are null for normal Global Payroll runs, so formulas may behave differently in those runs than in the actual forecast execution. If you ever hit a calculation issue in the forecast process that cannot be solved by looking at the element definitions, you may be stuck.

When this type of issue is found in a normal Global Payroll execution, one handy option is to enable Debug information and then review the Element Resolution Chain page. This page shows the step-by-step calculation of each element and is particularly helpful in identifying how an element was calculated.

Unfortunately, this information is not available in the standard forecast functionality. Luckily, it can be enabled using a tiny customisation.

In PeopleSoft HCM 9.1, the forecast functionality is executed from two different places:

DERIVED_GP.FCST_PB.FieldFormula - Abs_ForecastSetup function
FUNCLIB_GP_ABS.FCST_PB.FieldFormula - Abs_ForecastExec function

In both PeopleCode events, you will find a statement like this one:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "N", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

You will notice that the RUN_TRACE_OPTN field is set to "N". If you use "A" instead as the trace option value, you will obtain the Element Resolution Chain:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "A", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

By making this change, you will notice that the GP_AUDIT_TBL table starts being populated with the Element Resolution Chain information. However, it may still not be visible from the page itself, because some tables are only populated temporarily during the forecast execution. In order to enable access for forecast runs, you will need to customise the GP_AUDIT_SEG_VW search record by adding the second branch of the UNION ALL to the SQL definition (the added lines were shown in italics in the original post):

SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , B.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , B.RSLT_VER_NUM 
 , B.RSLT_REV_NUM 
 , B.SEG_BGN_DT 
 , B.SEG_END_DT 
  FROM PS_GP_AUDIT_TBL A 
  , PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID 
  UNION ALL 
 SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , A.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , 1 
 , 1 
 , NULL 
 , NULL 
  FROM PS_GP_AUDIT_TBL A 
 WHERE NOT EXISTS ( 
 SELECT 'X' 
  FROM PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID)

I hope you find this useful. Should you have any questions or doubts, I will be happy to assist.

Note: Keep in mind that it is not a good idea to leave the Debug information enabled in Production environments, at least not permanently. The time needed to run a forecast calculation with this type of information is significantly higher than without it. So, if you do not want to hit performance issues, my recommendation is to store a flag in a table indicating whether the Element Resolution Chain for forecasts should be enabled, along the lines of the sketch below.
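A rough PeopleCode sketch of that idea (the PS_GP_FCST_TRC_CFG table and its TRACE_FLAG field are made-up names for illustration):

Local string &TraceOptn;
/* Read the configuration flag; default to no trace (made-up table/field names) */
SQLExec("SELECT 'A' FROM PS_GP_FCST_TRC_CFG WHERE TRACE_FLAG = 'Y'", &TraceOptn);
If None(&TraceOptn) Then
   &TraceOptn = "N";
End-If;
/* Then bind &TraceOptn in place of the RUN_TRACE_OPTN literal in the INSERT above */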


2U Learning Platform Update: Removal of Moodle, addition of accessibility options

Michael Feldstein - Mon, 2015-04-20 06:25

By Phil HillMore Posts (308)

2U has now been a public company for over a year, and it had what is easily the most successful education IPO in recent history. Shares have almost doubled, from $13.00 at IPO to $25.50 last week. At the same time, there is a swirl of news around their new partner Yale and the Physician Assistant program – first the announcement of the program from one of the elite of elite schools, second the news that accreditation approval for the new program is not going to be as easy as hoped.

While both aspects are newsworthy, I’d like to dive deeper into their infrastructure and learning platforms. The company is far from complacent, as they continue to make significant changes.

One emerging trend that both Michael and I have been covering is the growing idea that there are real benefits to be gained when pedagogy and platform are developed in parallel. From Michael’s intro to the Post-LMS series:

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that we see a much more rapid and coherent progression of learning platform designs if you start with a particular pedagogical approach in mind. CBE is loosely tied to family of pedagogical methods, perhaps the most important of which at the moment is mastery learning. In contrast, questions about why general LMSs aren’t “better” beg the question, “Better for what?” Since conversations of LMS design are usually divorced from conversations of learning design, we end up pretending that the foundational design assumptions in an LMS are pedagogically neutral when they are actually assumptions based on traditional lecture/test pedagogy. I don’t know what a “better” LMS looks like, but I am starting to get a sense of what an LMS that is better for CBE looks like. In some ways, the relationship between platform and pedagogy is similar to the relationship former Apple luminary Alan Kay claimed between software and hardware: “People who are really serious about software should make their own hardware.” It’s hard to separate serious digital learning design from digital learning platform design (or, for that matter, from physical classroom design). The advances in CBE platforms are a case in point.

2U is following the same concept. Their pedagogy is based on small discussion sections (they boast an average class size of ~11 students) within masters level programs, combining synchronous discussions using a Brady Bunch approach.

[Image: Live Courses]

They also use a Bi-directional Learning Tool (BLT). The following video references the ill-fated Semester Online program, but the tool applies to all their customers.

2U’s approach also adds in custom-developed video segments that act as case studies.

Learning Platform Keeps Connect, Removes Moodle

Initially 2U patched together Moodle as an LMS and Adobe Connect as web conferencing for the video sessions, developing custom tools and applications to tie it all together. In addition to the learning platforms used within the courses, 2U also developed custom enrollment projection, marketing, support and application services, but in this post I’m going to focus on the learning components.

In an interview with James Kenigsberg, CTO, and Rob Cohen, President & COO, they described the rationale for the recent changes as architectural in nature – moving to a more modular approach and improving reliability. James and Rob said that their learning platforms are absolutely a pairing of technology and pedagogy. In their terms, agnostic platforms don’t accomplish much.

James described their origins of using Moodle with the belief that it’s “OK to start with a bowl of spaghetti code if you understand what you want”, and said that this is their second refactoring of the code in the past six years. They had already heavily customized the Moodle code, but now 2U will have all Moodle components out of the platform by the end of CY 2015. In their description, Moodle was great to start with as the base, but now they need a different approach.

2U relies heavily on Adobe Connect, with access to video tools and rooms available throughout the overall learning platform. The rationale for Adobe Connect (vs. Blackboard Collaborate, for example) is that Connect provides a persistent “room” for each faculty member, allowing them to customize it, add their own content & quizzes, set up polls, and do general configuration[1]. This room is then available to them throughout their courses. Other tools tend to have separate meeting instances, such that the content, configuration, and setup are no longer available after the meeting. For general configuration of the room, faculty members using 2U’s platform can make choices such as only allowing students to speak in the virtual room after they raise their hand vs. letting everyone talk unmuted.

For the technology stack, 2U is based on Amazon Web Services (AWS), with files saved to Amazon’s S3 storage service. The BLT is built on AngularJS.

Accessibility

2U has also taken advantage of the combined platform + pedagogy approach to make some improvements in accessibility. In this area, however, the benefit comes more from combining platform and content than from pure pedagogy.

For sight-impaired students, there is already compatibility with screen readers such as JAWS, but there is a new audio-overlay feature that is interesting. For the case study videos, 2U enables an option for students to hear a narrated audio track in parallel with the recorded video’s playback. For example, in this video from the social work program at USC, the Abby character is talking to a social worker. The audio track option adds descriptions to give the video context for the sight-impaired, such as:

Later, Abby rushes into Carol’s office. [dialogue] Abby sits down. [dialogue]

[Image: Abby pre-flashback]

During one transition, Abby describes her memories from childhood, and the audio overlay describes the scene:

In a flashback, ten year old Abby lies across her bed doing homework. Fran looks in. [dialogue] Abby sits up and gathers her books. [dialogue]

[Image: Abby flashback]

This tight integration works because the same people working on the platform are also working on the course material.

For hearing-impaired students, 2U has added two different transcript capabilities. One choice is a full transcript below the video[2].

[Image: Full transcript view]

Another choice is to overlay the transcript as in closed-caption style.

[Image: Overlay transcript view]

As there are more efforts to create online courses and programs, the topic of accessibility is becoming more important. Just this month, edX settled with the Department of Justice, while there are lawsuits against Harvard and MIT over their usage of the platform.

EdX, an online learning platform that Harvard co-founded with MIT in 2012, entered into a settlement agreement with the Department of Justice on Thursday and will address alleged violations of the Americans with Disabilities Act. That settlement could come to bear on a separate but similar lawsuit against Harvard that revolves around issues of accessibility online.

Namely, the edX settlement will require the platform to become accessible for people with disabilities—including those who are deaf or visually impaired. [snip]

The settlement comes as the National Association of the Deaf sues Harvard and MIT for allegedly discriminating against the deaf and hard of hearing by not providing online captioning both for the courses they offer through edX and the rest of their online content. The private lawsuit, filed in February, accuses the University of violating both the Americans with Disabilities Act and the Rehabilitation Act, which requires that educational institutions that receive federal funding provide equal access to disabled individuals. Legal experts have said that the suits against Harvard and MIT have merit.

This challenge of supporting students with disabilities in online courses has been a difficult one to solve, particularly as real solutions require the platform to have generic capabilities, the content (often created by individual faculty on their own prerogative) to follow appropriate guidelines, and the addition of transcripts / captions and audio.

2U has the benefit of being directly involved in all three areas and by having their learning platforms designed and customized for their specific pedagogical approach.

Standing Apart in Crowded Market

2U’s approach is unique in the crowded market of Online Service Providers, or “enablers”. 2U is vertically integrated and focused on niche programs – high-tuition masters programs at elite institutions. Most of the competition – Pearson EmbaNet, Wiley Deltak, LearningHouse, Academic Partnerships, etc – are going in different directions that include broad offerings (masters, bachelors, broad range of pedagogy).

I was a little late in covering 2U, largely because of my discomfort with two interdependent aspects of their business:

Furthermore, this vertically-integrated company goes against much of the movement towards interoperability and breaking down walled gardens. But the company is growing and seems to be quite successful, and I do like the strong focus on academic quality and student support. It is worth understanding how this tight combination of platform and pedagogy within the company plays out.

  1. Note: I believe that Bb Collaborate has an option for persistent faculty sessions, but the core design is based on events.
  2. In both cases I’m showing the mouse hover to also show the platform selection tool.

The post 2U Learning Platform Update: Removal of Moodle, addition of accessibility options appeared first on e-Literate.

Creating Sales Cloud Opportunity

Angelo Santagata - Mon, 2015-04-20 06:02

A common payload for creating opportunities:

  <createOpportunity>
    <opportunity>
      <ChildRevenue>
        <ProdGroupId>300000000537006</ProdGroupId>
        <RevnAmount>45000.0</RevnAmount>
        <ResourcePartyId>300000000519815</ResourcePartyId>
      </ChildRevenue>
      <SalesStageId>300000000157471</SalesStageId>
      <Comments>Provide training to 250 salespersons and support staff</Comments>
      <EffectiveDate>2012-09-30</EffectiveDate>
      <WinProb>5.0</WinProb>
      <Name>New Sales Training</Name>
      <OptyCreationDate>2012-08-27T00:00:00.000</OptyCreationDate>
      <TargetPartyId>300000001025130</TargetPartyId>
      <OwnerResourcePartyId>300000000519815</OwnerResourcePartyId>
      <OpportunityResource>
        <ResourceId>300000000519815</ResourceId>
        <OwnerFlag>true</OwnerFlag>
      </OpportunityResource>
    </opportunity>
  </createOpportunity>



 


Sample Java Code

1. Generate a proxy using Java tooling (like JDeveloper)
2. Java code snippet (cleaned up; the SecurityPolicyFeature policy name is an assumption based on common Oracle Fusion samples):

public static void main(String[] args) {

        // Default values
        String username = "matt.hooper";
        String password = "somepassword";
        String url = "https://<yourhost>/opptyMgmtOpportunities/OpportunityService?WSDL";

        // Security policy feature for the port (assumption: the usual
        // username-token-over-SSL policy used in Oracle Fusion samples)
        SecurityPolicyFeature[] securityFeature = new SecurityPolicyFeature[] {
            new SecurityPolicyFeature("oracle/wss_username_token_over_ssl_client_policy") };

        // Set up the web service interface
        OpportunityService_Service opportunityService_Service = new OpportunityService_Service();
        OpportunityService opportunityService =
            opportunityService_Service.getOpportunityServiceSoapHttpPort(securityFeature);

        // Get the request context to set the outgoing addressing properties
        WSBindingProvider wsbp = (WSBindingProvider) opportunityService;
        Map<String, Object> requestContext = wsbp.getRequestContext();
        requestContext.put(WSBindingProvider.USERNAME_PROPERTY, username);
        requestContext.put(WSBindingProvider.PASSWORD_PROPERTY, password);
        requestContext.put(WSBindingProvider.ENDPOINT_ADDRESS_PROPERTY, url);

        System.out.println("Example of creating an opportunity");

        // Create the payload
        ObjectFactory factory = new ObjectFactory();
        Opportunity newOpportunity = factory.createOpportunity();
        newOpportunity.setName("Name of Opportunity");
        // Set other values
        //
        Opportunity result = opportunityService.createOpportunity(newOpportunity);
        // and so on
}

 




Function-Based Indexes And CURSOR_SHARING = FORCE

Randolf Geist - Mon, 2015-04-20 02:00
In general it is known that Function-Based Indexes (FBIs) can no longer be used by the optimizer if the expression contains literals and CURSOR_SHARING = FORCE / SIMILAR (deprecated) turns those literals into bind variables. Jonathan Lewis described the issue quite a while ago here in detail.

In a recent OTN thread this issue was raised again, but to my surprise, when I played around with a test case that mimicked the OP's problem query, I found that (certain) Oracle versions have some built-in logic that enables FBI usage in certain cases where you would expect the index to be unusable.

If you test the following code on versions from 10.2.0.4 (possibly earlier) up to and including version 11.2.0.3 then you'll notice some interesting details:


create table t
as
select * from all_objects;

create index t_idx on t (owner || ' ' || object_name);

exec dbms_stats.gather_table_stats(null, 't')

set echo on linesize 200 pagesize 0

alter session set cursor_sharing = force;

select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);
Here is the relevant output I got from 11.2.0.1 for example:

SQL> alter session set cursor_sharing = force;

Session altered.

SQL>
SQL> select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID ar3tw7r1rvawk, child number 0
-------------------------------------
select /*+ full(t) */ * from t where owner || :"SYS_B_0" || object_name
= :"SYS_B_1"

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 284 (100)| |
|* 1 | TABLE ACCESS FULL| T | 1 | 117 | 284 (2)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"||' '||"OBJECT_NAME"=:SYS_B_1)


19 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID 6kzz3vw5x8x3b, child number 0
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 117 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_IDX | 1 | | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T"."SYS_NC00016$"=:SYS_B_1)


20 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID 6kzz3vw5x8x3b, child number 1
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 3778778741

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 52472 (100)| |
|* 1 | TABLE ACCESS BY INDEX ROWID| T | 724 | 84708 | 52472 (1)| 00:10:30 |
| 2 | INDEX FULL SCAN | T_IDX | 72351 | | 420 (1)| 00:00:06 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"||:SYS_B_0||"OBJECT_NAME"=:SYS_B_1)


20 rows selected.
Looking at the statement text that results from "CURSOR_SHARING = force" we can spot the expected bind variables instead of the literals, and this should result in a corresponding predicate that doesn't match the FBI expression. However, when looking at the filter expression in the predicate section (when forcing a full table scan) we can spot something interesting: It still shows the literal, which doesn't correspond to the predicate of the rewritten query text.

The next execution shows that the FBI really can be used despite the bind variable replacement taking place, and the final execution shows that the cursor sharing works correctly in that sense that a new child cursor got created for the same SQL text with a different plan and different predicate section when using a different literal in the original SQL text. V$SQL_SHARED_CURSOR shows "HASH_MATCH_FAILED" which is described as "No existing child cursors have the unsafe literal bind hash values required by the current cursor", which makes sense and probably means that the corresponding bind variable is marked as "unsafe" internally.

This optimisation only shows up if there is a suitable FBI - if there's no corresponding expression, the SQL text and predicate section match. Furthermore, it only supports certain expressions - Jonathan's example shows that in general these rewrites do prevent FBI usage. And obviously it ceases to work in 11.2.0.4 and 12c. Whether this is a bug or a feature I don't know, but since it only seems to apply to certain expressions it's probably not that relevant anyway.

As Jonathan points out in his note you can always work around the general problem by hiding the expression in a view, and since 11g of course a proper virtual column definition is the better approach, which doesn't expose this problem either.

Even better would be the proper usage of bind variables and not using forced cursor sharing, but there are still many installations out there that rely on that feature.

SQL Server - Change Management: list all updates

Yann Neuhaus - Mon, 2015-04-20 01:29

I am looking to list all SQL Server updates installed on a server, including Service Packs, Cumulative Updates and other fixes, like those shown in the "Installed Updates" panel in Windows. One way is sketched below.

[Screenshot: installed-update.png]
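A rough PowerShell sketch (it reads the same registry data the uninstall panel shows, including the 32-bit hive; adjust the DisplayName filter as needed):

$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'

Get-ItemProperty -Path $paths |
    Where-Object { $_.DisplayName -like '*SQL Server*' } |
    Select-Object DisplayName, DisplayVersion, InstallDate |
    Sort-Object DisplayName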