Scott Spendolini

Mostly Oracle APEX. Mostly.

My "Must See" ADF/MAF Sessions at KScope 16

Thu, 2016-05-26 07:00
Yes, you read that right - it's not a typo, nor did one of my kids or my wife gain access to my laptop.  It's part of a "blog hop," where a number of experts recommended "must attend" Kscope sessions outside of their core technologies.  I picked ADF/MAF, as I don't have any practical experience in either technology, but they are at least similar enough that I would not be totally lost.

In any case, the following sessions in the ADF/MAF track are worth checking out at Kscope16 this year:

How to Use Oracle ALTA UI to Create a Smashing UI for Web and Mobile
Luc Bors, eProseed NL
When: Jun 28, 2016, Session 12, 4:45 pm - 5:45 pm

I've always liked UI, and Oracle ALTA is a new set of templates that we'll be seeing quite a bit of across a number of new technologies.

Three's Company: Going Mobile with Oracle APEX, Oracle MAF, and Oracle MCS
Frederic Desbiens, Oracle Corporation
When: Jun 27, 2016, Session 6, 4:30 pm - 5:30 pm

I'll admit - anytime there's a comparison of APEX and other similar technologies, it's always interesting to witness the discussion.  If nothing else, there will be a good healthy debate as a result of this session!

Introduction to Oracle JET: JavaScript Extension Toolkit
Shay Shmeltzer, Oracle Corporation
When: Jun 28, 2016, Session 7, 8:30 am - 9:30 am

Oracle JET is a lot more than just charts, and there's a lot of momentum behind this technology.  I'm very interested to learn more and perhaps even see a thing or two that you can do with it, as well as the various integration points that are possible with other technologies.

Build a Mobile App in 60 Minutes with MAF
John King, King Training Resources
When: Jun 27, 2016, Session 5, 3:15 pm - 4:15 pm

Native mobile applications are something that APEX doesn't do, so it would be nice to see how this would be possible, should the need ever arise.

Thanks for attending this ODTUG blog hop! Looking for some other juicy cross-track sessions to make your Kscope16 experience more educational? Check out the following session recommendations from fellow experts!

Stinkin' Badges

Thu, 2016-01-28 07:55
Ever since APEX 5, the poor Navigation Bar has taken a back seat to the Navigation Menu. And for good reason, as the Navigation Menu offers a much more intuitive and flexible way to provide site-wide navigation that looks great, is responsive and just plain works. However, the Navigation Bar can and does still serve a purpose. Most applications still use it to display the Logout link and perhaps the name of the currently signed-on user. Some applications also use it to provide a link to a user's profile or something similar.

Another use for the Navigation Bar is to present simple metrics via badges. You've seen them before: the little red numbered icons that hover in the upper-right corner of an iPhone or Mac application, indicating that there's something that needs attention. Whether you consider them annoying or helpful, truth be told, they are a simple, minimalistic way to convey that information.

Fortunately, adding a badge to a Navigation Bar entry in the Universal Theme in APEX 5 is tremendously simple. In fact, it's almost too simple! Here's what you need to do:
First, navigate to the Shared Components of your application and select Navigation Bar List. From there, click Desktop Navigation Bar. There will likely only be one entry there: Log Out.

[screenshot]

Click Create List Entry to get started. Give the new entry a List Entry Label and make sure that the sequence number is lower than the Log Out link. This will ensure that your badged item displays to the left of the Log Out link. Optionally add a Target page. Ideally, this will be a modal page that will pop open from any page. This page can show the summary of whatever the badge is conveying. Next, scroll down to the User Defined Attributes section. Enter the value that you want the badge to display in the first (1.) field. Ideally, you should use an Application or Page Item here with this notation: &ITEM_NAME. But for simplicity's sake, it's OK to enter a value outright.
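If you go the item route, the value has to come from somewhere. One option is an Application Computation that fires Before Header. Here's a minimal sketch, assuming a hypothetical TASKS table and an application item named G_OPEN_TASKS (referenced as &G_OPEN_TASKS. in the User Defined Attribute):

-- Hypothetical Application Computation (Type: SQL Query returning a single value)
-- that populates G_OPEN_TASKS, the item behind the badge
SELECT COUNT(*)
  FROM tasks
 WHERE status = 'OPEN'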
Run your application, and have a look:

[screenshot]

Not bad for almost no work. But we can make it a little better. You can control the color of the badge with a single line of CSS, which can easily be dropped into the Custom CSS section of Theme Roller. Since most badges are red, let's make ours red as well. Run your application, open Theme Roller and scroll to the bottom of the options. Expand the Custom CSS region and enter the following text:

.t-Button--navBar .t-Button-badge { background-color: red;}

Save your customizations, and note that the badge should now be red:

[screenshot]

Repeat for each metric that you want to display in your Navigation Bar.

Formatting a Download Link

Fri, 2016-01-22 14:38
Providing file upload and download capabilities has been native functionality in APEX for a couple of major releases now. In 5.0, it's even more streamlined and 100% declarative.
In the interest of saving screen real estate, I wanted to represent the download link in an Interactive Report (IR) with an icon - specifically fa-download. This is a simple task to achieve - edit the column and set the Download Text to this:
<i class="fa fa-lg fa-download"></i>
The fa-lg class will make the icon a bit larger, and is not required. Now, instead of a "download" link, you'll see the icon rendered in each row. Clicking on the icon will download the corresponding file. However, when you hover over the icon, instead of getting the standard text, it displays this:
[screenshot]
Clearly not optimal, and very uninformative. Let's fix this with a quick Dynamic Action. I placed mine on the global page, as this application has several places where files can be downloaded. You can do the same, or simply put it on the page that needs it.
The Dynamic Action will fire on Page Load, and has one true action - a small JavaScript snippet:
$(".fa-download").prop('title','Download File');
This will find any instance of fa-download and replace the title with the text "Download File":
[screenshot]
If you're using a different icon for your download, or want it to say something different, then be sure to alter the code accordingly.

Conference Season

Mon, 2016-01-11 21:56
It's conference season!  That means time to start looking at flights and hotels and ensuring that while I'm on the road, my wife is not at work (no easy task).  In addition to many of the conferences that I've been presenting at for years, I have a couple of new additions to the list.

Here it is:

RMOUG - Denver, CO
One of the larger conferences, the year usually starts out in Denver for me, where crowds are always large and appreciative.  RMOUG has some of the most dedicated volunteers and puts on a great conference year after year.

GAOUG - Atlanta, GA
This will be my first time at GAOUG, and I'm excited to help them get their annual conference started.  Lots of familiar faces will be in attendance.  At only $150, it's worth checking out if you're within driving distance of Atlanta.

OCOJ - Williamsburg, VA (submitted)
This will (hopefully) also be my first Oracle Conference on the James.  Held in historic Williamsburg, OCOJ is also a steal at just $99.

UTOUG - Salt Lake City, UT
I'll head back out west to Utah for UTOUG.  Always good to catch up with the local Oracle community in Utah each year.  Plus, I make my annual SLC brewery tour while there.

GLOC - Cleveland, OH (submitted)
Steadily growing in popularity, the folks at GLOC put on an excellent conference.  Waiting to hear back on whether my sessions got accepted.

KSCOPE - Chicago, IL
Like everyone, I'm looking forward to one of the best annual technical conferences that I've regularly attended.  In addition to the traditional APEX content, there are a few surprises planned this year!

ECO - Raleigh/Durham, NC (planning on submitting)
ECO - formerly VOUG - is also growing in numbers each year.  There's a lot of tech in the RDU area, and many of the talented locals present here.  Bonus: a Jeff Smith-curated brewery/bar tour the night before.

OOW - San Francisco, CA (planning on submitting)
As always, the conference year typically ends with the biggest one - Oracle Open World.  While there's not as much APEX content as there once was, it's always been more focused on the marketing side of technology, which is good to hear every now and then.



User group conferences are one of the best types of training available, especially since they typically cost just a couple hundred dollars.  I encourage you to check out one near you.  Smaller groups are also great places to get an opportunity to present.  In addition to annual conferences, many smaller groups meet monthly or quarterly and are always on the lookout for new content.

Refreshing PL/SQL Regions in APEX

Tue, 2015-11-10 08:03

If you've been using APEX long enough, you've probably used a PL/SQL Region to render some sort of HTML that the APEX built-in components simply can't handle. Perhaps a complex chart or region that has a lot of custom content and/or layout. While best practices may be to use an APEX component, or if not, build a plugin, we all know that sometimes reality doesn't give us that kind of time or flexibility.

While the PL/SQL Region is quite powerful, it still lacks a key feature: the ability to be refreshed by a Dynamic Action. This is true even in APEX 5. Fortunately, there's a simple workaround that only requires a small change to your code: change your procedure to a function and call it from a Classic Report region.

In changing your procedure to a function, you'll likely only need to make one type of change: converting any htp.prn calls to instead populate and return a variable at the end of the function. Most, if not all, of the rest of the code can remain untouched.

Here's a very simple example:

Before:

PROCEDURE print_region
  (p_item IN VARCHAR2)
IS
BEGIN
  htp.prn('This is the value: ' || p_item);
END;

After:

FUNCTION print_region
  (p_item IN VARCHAR2)
  RETURN VARCHAR2
IS
  l_html VARCHAR2(100);
BEGIN
  l_html := 'This is the value: ' || p_item;
  RETURN l_html;
END;
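One caveat: when a function is called from SQL, a VARCHAR2 return value is capped at 4,000 bytes in most configurations. If your region renders more HTML than that, a CLOB-returning variant of the same idea is one option - a minimal sketch, not tied to any specific release:

FUNCTION print_region
  (p_item IN VARCHAR2)
  RETURN CLOB
IS
  l_html CLOB;
BEGIN
  -- Append a fragment everywhere the procedure used to call htp.prn
  l_html := 'This is the value: ' || p_item;
  l_html := l_html || '<br />Additional markup can be appended here.';
  RETURN l_html;
END;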

On the APEX side, simply create a Classic Report and set the query to something like this that refers to your function:

SELECT package_name.function_name(p_item => :P1_ITEM) result FROM dual
You'll then want to edit the Attributes of the Classic Report and turn off Pagination, set the Headings type to None and ensure Partial Page Refresh is enabled. Next, click on the Template Options and Disable Alternating Rows and Row Highlighting and then check Stretch Report.

[screenshot]

Make any other UI tweaks that you need, and you should now have a Dynamic PL/SQL Region that can be refreshed in a Dynamic Action.

APEX 5 Cheat Sheet

Mon, 2015-11-09 21:18
On Twitter today, Jeff Smith tweeted about a new SQL Developer cheat sheet that he created with a site called cheatography.com.
Not to be outdone, I created a cheat sheet for the APEX 5 Keyboard Shortcuts. Not only can you view it online, but you can also download a PDF version of it. Check it out and let me know if there's anything that you'd like to see added.

Hide and Seek

Tue, 2015-11-03 14:30

In migrating SERT from 4.2 to 5.0, there are a number of challenges that I'm facing. This has to do with the fact that I am also migrating a custom theme to the Universal Theme; almost 100% of the application would have just worked had I chosen to leave it alone. I didn't. More on that journey in a longer post later.

In any case, some of the IR filters that I have on by default can get a bit... ugly. Even in the Universal Theme:

[screenshot]

In APEX 4.2, you could click on the little arrow, and it would collapse the region entirely, leaving only a small trace that there's a filter. That's no longer the case:

[screenshot]

So what to do... Enter CSS & the Universal Theme.

Simply edit the page and add the following to the Inline CSS region (or add the CSS to the Theme Roller if you want this change to impact all IRs):

.a-IRR-reportSummary-item { display: none; }

This will cause most of the region to simply not display at all - until you click on the small triangle icon, which will expand the entire set of filters for the IR. Clicking it again makes it go away. Problem solved with literally three words (and some punctuation).

Universal Theme Face Lift

Fri, 2015-10-23 09:42
I'm a huge fan of APEX's new Universal Theme, and have been working quite a bit with it.  One of the coolest features is how easy it is to change the colors.  You don't even need to be good at design - just click Theme Roller, and spin all the things!

However, as much as you change the colors, the look and feel still largely looks the same, since the base font is unchanged.

So let's change it up! More importantly, let's change it up without making any changes to the Universal Theme itself, so that when we upgrade to APEX 5.1, our changes will be preserved.

First, head on over to Google Fonts (https://www.google.com/fonts) and pick a font to use as your new base font.  It doesn't really matter which one you use.  For this example, I’m going to use Montserrat.  Once you've chosen which font to use, click on the Quick Use icon.  This will render a page with a number of different options as to how to include the font in your application.

Select which styles of the font you want to include.  Some fonts will have bold and italic; others will not, so make sure the font you select also has the styles that you want.

[screenshot]

Next, pick the character set(s) that you want to include.  My choice was pretty simple.

[screenshot]

Since there’s no “APEX” tab, we’re going to have to make do with the @import tab.  You’ll want to copy just the URL portion of the snippet.  So in this example, it would be: https://fonts.googleapis.com/css?family=Montserrat

[screenshot]

Lastly, we’ll also need to copy the font-family name, as we’ll use that in Theme Roller.  For this example, we would only need Montserrat.

[screenshot]

Now that we have all of the details from Google Fonts, head on over to APEX.  First, edit your application’s Shared Components and navigate to User Interface Attributes and edit the DESKTOP UI.  In the Cascading Style Sheets section, paste the URL that you copied from Step 3 of the Google Fonts page into the File URLs region.

[screenshot]

Scroll to the top and click Apply Changes.

Next, run your application and open up Theme Roller by clicking on the link in the developer toolbar.  Once Theme Roller opens up, expand the Custom CSS region and paste the following code there, replacing Montserrat with your font-family name defined in Step 4 of the Google Fonts page:
body {
  font-family: 'Montserrat', sans-serif;
  font-weight: 300;
  line-height: 25px;
  font-size: 14px;
}
Save your changes, and notice that the entire application should be using your new font!  Don’t like how it looks?  Go pick a different font and see if that helps; or simply remove the Custom CSS and File URL to revert to the default one.

Next Oracle APEX NOVA Meetup Date Set

Mon, 2015-10-12 15:26
The next Oracle APEX NOVA MeetUp is going to be held on November 12th, 2015 at 7PM.  We decided to mix things up a bit and are going to have it at Vapianos in the Reston Town Center.  We're also going to try a more informal agenda.  In other words, there will be no agenda.

So if you're around Reston on November 12th from 7-9PM (or so), feel free to stop by.  Here's the MeetUp.com link: http://www.meetup.com/orclapex-NOVA/events/226009784/

Drop It Like It's Not

Thu, 2015-09-17 09:50
I just ran the following script:

BEGIN
  -- TABLES
  FOR x IN (SELECT table_name FROM user_tables)
  LOOP
    EXECUTE IMMEDIATE ('DROP TABLE ' || x.table_name || ' CASCADE CONSTRAINTS');
  END LOOP;

  -- SEQUENCES
  FOR x IN (SELECT sequence_name FROM user_sequences)
  LOOP
    EXECUTE IMMEDIATE ('DROP SEQUENCE ' || x.sequence_name);
  END LOOP;

  -- VIEWS
  FOR x IN (SELECT view_name FROM user_views)
  LOOP
    EXECUTE IMMEDIATE ('DROP VIEW ' || x.view_name);
  END LOOP;
END;
/

Basically, drop all tables, views and sequences.  It worked great, cleaning out those objects in my schema without touching any packages, procedures or functions.  There was just one problem: I ran it in the wrong schema.

Maybe I didn't have enough coffee, or maybe I just wasn't paying attention, but I essentially wiped out a schema that I really would rather not have touched.  But I didn't even flinch, and here's why.

All tables & views were safely stored in my data model.  All sequences and triggers (and packages, procedures and functions) were safely stored in scripts.  And both the data model and associated scripts were safely checked in to version control.  So re-instantiating this project was a mere inconvenience that took no more than the time it takes to drink a cup of coffee - something I clearly should have done more of earlier this morning.

Point here is simple: take the extra time to create a data model and a version control repository for your projects - and then make sure to use them!  I religiously check in code and then make sure that at least my TRUNK is backed up elsewhere.  Worst case for me, I'd lose a couple of hours of work, perhaps even less, which is far better than the alternative.
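One other cheap habit that would have prevented the whole adventure: a guard at the top of any destructive script.  A minimal sketch, assuming the script is only ever meant for a hypothetical schema named SCOTT_DEV:

-- Abort immediately if this is not the schema we meant to wipe
BEGIN
  IF USER != 'SCOTT_DEV' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Wrong schema: ' || USER);
  END IF;
END;
/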

Sumner Technologies: Take Two

Mon, 2015-06-01 11:44

About a month ago, I left my position at Accenture Enkitec Group. I had a couple of ideas as to what I wanted to do next, but nothing was 100% solid.  After considering a couple of different options, I'm happy to announce that together with Doug Gault & Tim St. Hilaire, we're re-launching Sumner Technologies.

Much like last time, the focus will be on Oracle APEX; but we’re going to refine that focus a little bit.  In addition to traditional consulting, we’re going to focus more on higher-level services, such as security reviews and APEX health checks, as well as produce a library of on-demand training content.  APEX has matured tremendously over the past few years, and we feel that these services will complement the needs of the marketplace.

It’s exciting to be starting things over, so to speak.  Lots will be the same, but even more will be different.  There’s a lot of work to be done (yes, I know the site is not in APEX - yet), but we’re excited at the potential of what we’re going to offer APEX customers, as the APEX marketplace is not only more mature, but it’s also grown and will continue to do so.

Feel free to check out what we’re up to on Facebook, Twitter, LinkedIn and our website.  Or find us at KScope in a couple of weeks!

Destroying The Moon

Mon, 2015-04-20 09:18
Just under three years ago, I joined Enkitec when they acquired Sumneva.  The next three years brought a whirlwind of change and excitement - new products, additional training, and expanding the APEX practice from an almost nonexistent state to one of the best in the world.

Like all good things, that run has come to an end.  Last Friday was my final day at Accenture, and I am once again back in the arena of being self-employed.  Without any doubt, I am leaving behind some of the best minds in the Oracle community.  However, I am not leaving behind the new friendships that I have forged over the past three years.  Those will come with me and hopefully remain with me for many, many years to come.

Making the jump for the second time is not nearly as scary as it was the first time, but it's still an emotional move.  Specifically, what's next for me?  That's a good question, as the answer is not 100% clear yet.  There are a lot of possibilities, and hopefully things will be a lot more defined at the end of the week.

#letswreckthistogether

Little League, Big Data

Tue, 2015-03-03 13:36
Last week, I participated in my first Little League draft for my son's baseball team.  This was new territory, as up until now, play has been non-competitive.  This year we will actually have to keep score, and there will be winners and losers.

In preparation for the draft, we had tryouts a few weeks ago where we evaluated the kids on a number of different criteria.  Never have I seen so many scared 7- and 8-year-olds march through the cages as dozens of coaches with clipboards watched and recorded their every move.  I camped out and watched them pitch, as from what many veteran coaches told me, the key to keeping the game moving along is the pitcher.

Before the draft, we were sent a couple of key spreadsheets.  The first one had an average rating of each kid's tryout assessment, as scored by the board members.  The second one contained coaches' evaluations for some of the players from past seasons.  Lots and lots of raw data, and nothing more.

Time to fire up APEX.  I created a workspace on my laptop, as I was not sure if we would have WiFi at the draft.  From there, I imported both spreadsheets into tables, and got to work on creating a common key.  Luckily, the combination of first and last name produced no duplicates, so it was pretty easy to link the two tables.  Next, I created a simple IR based on the EVALS table - which was the master.  This report showed all of the tryout scores, and also ranked each player based on the total score.
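The linking query itself was nothing fancy.  A minimal sketch, assuming the coach evaluations landed in a hypothetical COACH_EVALS table keyed the same way:

-- Join tryout scores to prior-season coach evaluations on first + last name
SELECT e.first_name,
       e.last_name,
       e.total_score,
       c.coach_rating
  FROM evals e
  LEFT JOIN coach_evals c
    ON  c.first_name = e.first_name
    AND c.last_name  = e.last_name
 ORDER BY e.total_score DESC;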

Upon editing a row in EVALS, I had a second report that showed a summary of the coach's evaluation from prior seasons.  I could also make edits to the EVALS table, such as identifying players that I was interested in, marking players that were already drafted, and adding any other comments that I wanted to track.

After about 20 minutes of reviewing the data, I noticed something.  I was using data collected while the players were under a lot of stress.  The data set was also small, as each player only got 5 pitches, 5 catches, 5 throws, etc.  The better indicator of a player's talents was the coach's evaluations, as they represent an entire season of interaction with the player, not just a 3-4 minute period.

Based on this, I was quickly able to change my IR on the first page to also include a summary of the coach's evaluations alongside the tryout evaluations.  I sorted my report based on that, and got a very different order.  This was the order that I was going to go with for my picks.

Once the draft started, it was very easy to mark each player as drafted, so that any drafted player would no longer show up in the report.  It was also trivial to toggle the "must draft" column on and off, ensuring that if there were any younger players that I wanted, I could get them in the early rounds before we had to only draft older players.

Each time it was my pick, I already knew which player I was going to draft.  Meanwhile, the other coaches shuffled stacks of marked-up papers and attempted to navigate multiple spreadsheets when it was their turn.  Even the coordinator commented on how I was always ready and kept things moving along.

Unless you're some sort of youth athletics coach that does a draft, this application will likely do you little good.  But the concept can go a long way.  In almost any role in any organization, you likely have data for something scattered across a few different sources or spreadsheets.  This data, when isolated, only paints a blurry part of the whole picture.  But when combined and analyzed, the data can start to tell a better story, as was the case in my draft.

The technical skills required to build this application were also quite minimal.  The bulk of what I used was built-in functionality of the Interactive Report in APEX.  Merging the data and linking the two tables was really the only true technical portion of this, and that's even something that can be done by a novice.

So the next time you have a stack of data that may be somehow related, resist the temptation to use old methods when trying to analyze it.  Get it into the database, merge it as best you can, and let APEX do the rest.

Screaming at Each Other

Thu, 2015-02-19 20:20
Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.  For the past few conferences, I’ve started my sessions by asking who in the audience uses Twitter.  Time and time again, only about 10-20% of the participants say that they do.  That means that up to 90% of the participants don’t.  That’s a lot of people.  My informal surveys also indicate a clear generation gap.  Those that do use Twitter tend to be around 40 years old or younger.  There are of course exceptions to this rule, but by and large this is the evidence that I have seen.

I actually took about 10 minutes before my session today to attempt to find out why most people don’t care about Twitter.  The answer was very clear and consistent: there’s too much crap on there.  And they are correct.  I’d guess that almost 100% of all Tweets are useless or at least irrelevant to an Oracle professional.

I then took a few minutes to explain the basics of how it worked - hash tags, followers, re-tweets and the like.  Lots of questions and even more misconceptions.  “So does someone own a hash tag?” and “Can I block someone that I don’t care for?” were some of the questions that I addressed.

After a few more questions, I started to explain how it could benefit them as Oracle professionals.  I showed them that most of the Oracle APEX team had accounts.  I also highlighted some of the Oracle ACEs.  I even showed them the RMOUG hash tag and all of the tweets associated with it.  Light bulbs were starting to turn on.

But enough talking.  It was time for a demo.  To prove that people are actually listening, I simply tweeted this:
Please reply if you follow #orclapex - want to see how many people will in the next 30 mins. Thanks!
— Scott Spendolini (@sspendol) February 19, 2015
Over the next 30 minutes, I had 10 people reply.  At the end of the session, I went through the replies and said what I knew about each responder: Oracle Product Manager, Oracle Evangelist, Oracle ACE, APEX expert, etc.  The crowd was stunned.  This proved that Twitter as a medium to communicate with Oracle experts was, in fact, real.

More questions.  “Can I Tweet to my power company if I have an issue with them?” and “Do people use profanity on Twitter?” were some of the others.  People were clearly engaged and interested.  Mission accomplished.

The bigger issue here is that I strongly feel that the vast majority of the Oracle community is NOT on Twitter.  And that is a problem, because so much energy is spent tweeting about user groups and conferences.  It's like we’re just screaming at each other, and not at those who need to listen.  

We can fix this.  I encourage everyone who presents at a conference to take 5 minutes at the beginning or end of their session to talk about the benefits of Twitter.  Demonstrate that if you follow Oracle experts, the content that will be displayed is not about Katy Perry, but rather about new features, blog posts or other useful tidbits that can help people with their jobs. Take the time to show them how to sign up, how to search for content, and who to follow.  I think that if we all put forth a bit of effort, we can recruit many of those to join the ranks of Twitter for all the right reasons, and greatly increase the size of the Oracle community that’s connected via this medium.

Oracle APEX 5 Update from OOW

Wed, 2014-10-01 09:18
The big news about Oracle APEX from OOW is not so much about what, but more about when.  Much to many people's disappointment, APEX 5.0 is still going to be a few months out.  The "official" release date has been updated from "calendar year 2014" to "fiscal year 2015".  For those not in the know, Oracle's fiscal year ends on May 31st, so that date represents the new high-water mark.

Despite this bit of bad news, there were a number of bits of good news as well.  First of all, there will be an EA3.  This is good because it demonstrates that the team has been hard at work fixing bugs and adding features.  Based on the live demonstrations that were presented, there are some subtle and some not-so-subtle things to look forward to.  The subtle ones include an even more refined UI, complete with smooth fade-through transitions.  I tweeted about the not-so-subtle ones the other day, but to recap here: pivot functionality in IRs, plus column toggle and reflow in jQuery Mobile.

After (or right before - it wasn't 100% clear) EA3 is released, the Oracle APEX team will host their first public beta program.  This will enable select customers to download and install APEX 5.0 on their own hardware.  This is an extraordinary and much-needed positive change in their release cycle, as for the first time, customers can upgrade their actual applications in their environment and see what implications APEX 5.0 will bring.  Doing a real-world upgrade on actual APEX applications is something that the EA instances could never even come close to pulling off.

After the public beta, Oracle will upgrade their internal systems to APEX 5.0 - and there's a lot of those.  At last count, I think the number of workspaces was just north of 3,000.  After the internal upgrade, apex.oracle.com will have its turn.  And once that is complete, we can expect APEX 5.0 to be released.

No one likes delays.  But in this case, it seems that the extra time required is quite justified, as APEX 5.0 still needs some work, and the upgrade path from 4.x needs to be nothing short of rock-solid.  Keep in mind that with each release, there are a larger number of customers using a larger number of applications, so ensuring that their upgrade experience is as smooth as possible is just as, if not more, important than any new functionality.

In the meantime, keep kicking the tires on the EA instance and provide any feedback or bug reports!

Take a Walk

Mon, 2014-07-14 11:24
Steven Feuerstein (https://twitter.com/stevefeuerstein) just tweeted this:

Improve your programming with a daily regimen of situps (or anything you can do to strengthen abs), walks in the woods, and lots of water.
— Steven Feuerstein (@stevefeuerstein) July 14, 2014

Which, in turn, inspired me to quickly write this post.

The combination of being in IT and working from home leads to lots of hours logged in some sort of chair, whether it's in my home office, at a customer site or a coffee shop.  You don't need to be a doctor to realize that this is not particularly healthy behavior.

So for the past few months, I've incorporated something new into my daily routine: taking a walk.  It doesn't sound like much, and quite honestly, it really isn't.  But, I wish that I had started this years ago, because the benefits of it are huge.

First of all, it's nice to get outside during the day, especially when it's actually nice out.  Nothing can quite compare to it, no matter how many pixels they squeeze into a tablet.  Sometimes I just walk at a leisurely pace, other times I run.  I'm not training for any specific race, nor do I feel compelled to share my statistics over social media.  I just do what I want when I can.

Second of all, it gives me some time to either listen to a podcast or music, or to just think.  I've really grown to like the podcasts that the folks at TWiT (http://www.twit.tv) produce, with This Week in Tech being one of my favorites.  Listening to something that interests you makes the time go by so much quicker that you may even be tempted to extend your distance to accommodate the extra content.

In fact, listening to them really puts me in a creative and inspired mood, which helps explain the third benefit: background processing.  I don't know much about neuroscience, but I do know a little bit how my brain works.  If I'm struggling with a difficult problem, I've learned over time that the best thing that I can do is to literally walk away from it.  Going on a walk or run or even a drive allows my brain to "background process" that problem while I focus on other things.  The "A-Ha!" moment that I have is my brain's way of alerting me once the problem has been solved.   Corny, I know, but that's how it works for me.

And lastly - and probably most importantly - I've been able to drop a few pounds because of my walks (combined with better eating habits).  I do use RunKeeper to log my walks and track my weight, because numbers simply don't lie.  It also serves as a source of inspiration if I can beat a personal record or cross a weight milestone.

Next ORCLAPEX NOVA Meetup: July 17th

Mon, 2014-07-14 07:37
The next Meetup for the ORCLAPEX NOVA Meetup Group will be this Thursday, July 17th at 7:00PM at Oracle Reston.  (Details: http://www.meetup.com/orclapex-NOVA/events/192592582/)

We're going to try the "Open Mic" format that has been wildly successful at KScope for the past few years.  The rules are quite simple: anyone can demonstrate their APEX-based solution for up to 10 minutes.  No need to reserve a spot or spend too much time planning.  And as always, no slides will be permitted - strictly demo.

Anyone and everyone is welcome to present - even if you have never presented before.  We're a welcoming group, so please don't be shy nor feel intimidated!  I've actually seen quite an amazing selection of APEX-based solutions at prior open mic nights from people who have never presented before, so I encourage everyone to give it a try.

While there is WiFi available at Oracle, it's always best to have a local copy of your demonstration, just in case you can't connect or the network is having issues.

See you Thursday night!

ORCLAPEX NOVA Update - Columbus Brings It

Mon, 2014-05-19 05:31

For the upcoming inaugural ORCLAPEX NOVA MeetUp on May 29th, not only will we have Mike Hichwa, Shakeeb Rahman and David Gale from the Reston-based Oracle APEX development team present, but we will also have the entire Columbus, OH-based APEX team in attendance: both Joel Kallman and Jason Straub will be in town and have RSVP’ed to the MeetUp!

Outside of major conferences such as KScope or OpenWorld, there is no other public forum that will have the same level of APEX expertise from the team that develops the product present!  So what are you waiting for?  Join the rest of us who have already RSVP’ed to this event, as it’s 100% free, and you’re sure to learn a bunch about APEX 5.0 and other exciting happenings in the Database Development world at Oracle.

Note: you have to be a member of MeetUp (which is free to join) and RSVP to the event to attend (which is also free), as a list of people needs to be provided to Oracle the day before the event occurs.

BLOBs in the Cloud with APEX and AWS S3

Wed, 2014-05-14 15:09
Overview

Recently, I was working with one of our customers and ran into a rather unique requirement and an uncommon constraint. The customer - Storm Petrel - has designed a grant management system called Tempest.  This system is designed to aid local municipalities when applying for FEMA grants after a natural disaster occurs.  As one can imagine, there is a lot of old fashioned paperwork when it comes to managing such a thing.

Thus, the requirement called for the ability to upload and store scanned documents.  No OCR or anything like that, but rather invoices and receipts so that a paper trail of the work done and associated billing activity can be preserved.  For APEX, this can be achieved without breaking a sweat, as the declarative BLOB feature can easily upload a file and store it in a BLOB column of a table, complete with filename and MIME type.

However, the tablespace storage costs from the hosting company for the anticipated volume of documents was considerable.  So much so that the cost would have to be factored into the price of the solution for each customer, making it more expensive and obviously less attractive.

My initial thought was to use Amazon’s S3 storage solution, since the cost of storing 1GB of data for a month is literally 3 cents.  Data transfer prices are also ridiculously inexpensive, and from what I have seen via marketing e-mails, the price of this and many of Amazon’s other AWS services has been on a downward trend for some time.

The next challenge was to figure out how to get APEX integrated with S3.  I have seen some of the AWS API documentation, and while there are ample examples for Java, .NET and PHP, there is nothing at all for PL/SQL.  Fortunately, someone else has already done the heavy lifting here: Morten Braten & Jeffrey Kemp.

Morten’s Alexandria PL/SQL Library is an amazing open-source suite of PL/SQL utilities which provides a number of different services, such as document generation, data integration and security.  Jeff Kemp has a presentation on SlideShare that best covers the breadth of this utility.  You can also read about the latest release - 1.7 - on Morten’s blog here.  You owe it to yourself to check out this library whether or not you have any interest in AWS S3!

In this latest release of the library, Jeff Kemp has added a number of enhancements to the S3 integration piece of the framework, making it quite capable of managing files on S3 via a set of easy-to-use PL/SQL APIs.  And these APIs can be easily & securely integrated into APEX and called from there.  He even created a brief presentation that describes the S3 APIs.

Configuring AWS Users and Groups

So let’s get down to it.  How does all of this work with APEX?  First of all, you will need to create an AWS account.  You can do this by navigating to http://aws.amazon.com/ and clicking on Sign Up.  The wizard will guide you through the account creation process and collect any relevant information that it needs.  Please note that you will need to provide a valid credit card in order to create an AWS account, as the services are not free, depending on which ones you choose to use.

Once the AWS account is created, the first thing that you should consider doing is creating a new user that will be used to manage the S3 service.  The credentials that you use when logging into AWS are similar to root, as you will be able to access and manage any of the many AWS services.  When deploying only S3, it’s best to create a user that can do only that.

To create a new user:

1) Click on the Users tab

2) Click Create New User

3) Enter the User Name(s) and click Create.  Be sure that Generate an access key for each User is checked.

[screenshot]
Once you click Create, another popup region will be displayed.  Do not close this window!  Rather, click on Show User Security Credentials to display the Access Key ID and Secret Access Key ID.  Think of the Access Key ID as a username and the Secret Access Key ID as a password, and then treat them as such.

[screenshot]
For ease of use, you may want to click Download Credentials and save your keys to your PC.

The next step is to create a Group that your new user will be associated with.  The Group in AWS is used to map a user or users to a set of permissions.  In this case, we will need to allow our user to have full access to S3, so we will have to ensure that the permissions allow for this.  In your environment, you may not want to grant as many privileges to a single user.

To create a new group:

1) Click on the Groups tab

2) Click on Create New Group

3) Enter the Group Name, such as S3-Admin, and click Continue

The next few steps may vary depending on which privileges you want to assign to this group.  The example will assume that all S3 privileges are to be assigned.

4) Select Policy Generator, and then click on the Select button.

5) Set the AWS Service drop down to Amazon S3.

6) Select All Actions (*) for the Actions drop down.

7) Enter arn:aws:s3:::* for the Amazon Resource Name (ARN) and click Add Statement.  This will allow access to any S3 resource.  Alternatively, to create a more restricted group, a bucket name could have been specified here, limiting the users in this group to only be able to manage that specific bucket.

8) Click Continue.

9) Optionally rename the Policy Name to something a little less cryptic and click Continue.

10) Click Create group to create the group.
[animation illustrating the previous steps]

Next, we’ll add our user to the newly created group.

1) Select the group that was just created by checking the associated checkbox.

2) Under the Users tab, click Add Users to Group.

3) Select the user that you want to add and then click Add Users.

The user should now be associated with the group.

[screenshot]
Select the Permissions tab to verify that the appropriate policy is associated with the user.

[screenshot]
At this point, the user management portion of AWS is complete.

Configuring AWS S3

The next step is to configure the S3 portion. To do this, navigate to the S3 Dashboard:

1) Click on the Services tab at the top of the page.

2) Select S3.

You should see the S3 dashboard now:

[screenshot]
S3 uses “buckets” to organize files.  A bucket is just another word for a folder.  Each of these buckets has a number of different properties that can be configured, making the storage and security options quite extensible.  While there is a limit of 100 buckets per AWS account, buckets can contain folders, and when using the AWS APIs, it’s fairly easy to provide a layer of security based on a file’s location within a bucket.

Let’s start out by creating a bucket and setting up some of the options.

1) Click on Create Bucket.

2) Enter a Bucket Name and select the Region closest to your location and click Create.  One thing to note - the Bucket Name must be unique across ALL of AWS.  So don’t even try demo, test or anything like that.

3) Once your bucket is created, click on the Properties button.

[screenshot]
I’m not going to go through all of the properties of a bucket in detail, as there are plenty of other places that already have that covered.  Fortunately, for our purposes, the default settings on the bucket should suffice.  It is worth taking a look at these settings, as many of them - such as Lifecycle and Versioning - can definitely come in handy and reduce your development and storage costs.

Next, let’s add our first file to the bucket.  To do this:

1) Click on the Bucket Name.

2) Click on the Upload button.

3) A dialog box will appear.  To add a file or files, click Add Files.

4) Using the File Upload window, select a file that you wish to upload.  Select it and click Open.

5) Click Start Upload to initiate the upload process.
Depending on your file size, the transfer will take anywhere from a second to several minutes.  Once it’s complete, your file should be visible in the left side of the dashboard.

6) Click on the recently uploaded file.

7) Click on the Properties button.

[screenshot]
Notice that there is a link to the file displayed in the Properties window.  Click on that link.  You’re probably looking at something like this now:

[screenshot]
That is because by default, all files uploaded to S3 will be secured.  You will need to call an AWS API to generate a special link in order to access them.  This is important for a couple of reasons.  First off, you clearly don’t want just anyone accessing your files on S3.  Second, even if securing files is not a major concern, keep in mind that S3 also charges for data transfer.  Thus, if you put a large public file on S3, and word gets out as to its location, charges can quickly add up as many people access that file.  Fortunately, securely accessing files on S3 from APEX is a breeze with the Alexandria PL/SQL libraries.  More on that shortly.

If you want to preview any file in S3, simply right-click on it and select Open or Download.  This is also how you rename and delete files in S3.  And only authorized AWS S3 users will be able to perform these tasks, as the S3 Dashboard requires a valid AWS account.

Installing AWS S3 PL/SQL Libraries
The next thing that needs to be installed is the Alexandria PL/SQL library - or rather just the Amazon S3 portion of it.  There is no need to install the rest of the components, especially if they are not going to be used.  For ease of use, these objects can be installed directly into your APEX parse-as schema.  
However, they can also be installed into a centralized schema and then made available to other schemas that need to use them.

There are eight files that need to be installed, as well as a one-off command.

1) First, connect to your APEX parse-as schema and run the following script:
create type t_str_array as table of varchar2(4000)
/
2) Next, the HTTP_UTIL & DEBUG packages need to be installed.  These allow the database to retrieve files from S3 as well as provide a debugging infrastructure.

To install these packages, run the following four scripts as your APEX parse-as schema:
/plsql-utils-v170/ora/http_util_pkg.pks
/plsql-utils-v170/ora/http_util_pkg.pkb
/plsql-utils-v170/ora/debug_pkg.pks
/plsql-utils-v170/ora/debug_pkg.pkb
Before running the S3 scripts, the package body of AMAZON_AWS_AUTH_PKG needs to be modified so that your AWS credentials are embedded in it.

3) Edit the file amazon_aws_auth_pkg.pkb in a text editor.

4) Near the top of the file are three global variable declarations: g_aws_id, g_aws_key and g_gmt_offset.  Set the values of these three variables to the Access Key ID, Secret Key ID and GMT offset.  These values were displayed and/or downloaded when you created your AWS user.  If you did not record these, you will have to create a new pair back in the User Management dashboard.
Here’s an example of what the changes will look like, with the values obfuscated:
g_aws_id     varchar2(20) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Access Key ID
g_aws_key varchar2(40) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Secret Key
g_gmt_offset number := 4; -- your timezone GMT adjustment (EST = 4, CST = 5, MST = 6, PST = 7) 
It is also possible to store these values in a more secure place, such as an encrypted column in a table and then fetch them as they are needed.
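For example, the hard-coded globals could be swapped for a lookup function.  A minimal sketch, assuming a hypothetical AWS_CREDENTIALS table (encrypting the stored value is left as an exercise):

-- Hypothetical lookup used in place of the hard-coded g_aws_key value
FUNCTION get_aws_key
  RETURN VARCHAR2
IS
  l_key VARCHAR2(40);
BEGIN
  SELECT cred_value
    INTO l_key
    FROM aws_credentials
   WHERE cred_name = 'AWS_SECRET_KEY';
  RETURN l_key;
END;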

5) Once the changes to amazon_aws_auth_pkg.pkb are made, save the file.

6) Next, run the following four SQL scripts in the order below as your APEX parse-as schema:
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pks
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pkb
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pks
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pkb
If there are no errors, then we’re almost ready to test and see if we can view S3 via APEX!

IMPORTANT NOTE: The S3 packages in their current form do not offer support for SSL.  This is a big deal, since any request that is made to S3 will be done in the clear, putting the contents of your files at risk as they are transferred to and from S3.  There is a proposal on the Alexandria Issues Page that details this deficiency.

I have made some minor alterations to the AMAZON_AWS_S3_PKG package which accommodate using SSL and Oracle Wallet when calling S3. You can download it from here.  When using this version, there are three additional package variables that need to be altered:
 
g_orcl_wallet_path constant varchar2(255) := 'file:/path_to_dir_with_oracle_wallet';
g_orcl_wallet_pw constant varchar2(255) := 'Oracle Wallet Password';
g_aws_url_http constant varchar2(255) := 'https://'; -- Set to either http:// or https:// 
Additionally, Oracle Wallet will need to be configured with the certificate from s3.amazonaws.com.  Jeff Hunter has an excellent and easy-to-follow post here that will guide you through configuring Oracle Wallet.
Configuring the ACL
Starting with Oracle 11g, an Access Control List - or ACL - restricts which outbound transactions are allowed to occur.  By default, none of them are.  Thus, we will need to configure the ACL to allow our schema to access Amazon’s servers.
 
To create the ACL, run the following script as SYS, replacing the values with ones specific to your environment:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL
  (
    acl         => 'apex-s3.xml',
    description => 'ACL for APEX-S3 to access Amazon S3',
    principal   => 'APEX_S3',
    is_grant    => TRUE,
    privilege   => 'connect'
  );
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL
  (
    acl         => 'apex-s3.xml',
    host        => '*.amazonaws.com',
    lower_port  => 80,
    upper_port  => 80
  );
  COMMIT;
END;
/
IMPORTANT NOTE: When using SSL, the lower_port and upper_port values should both be set to 443.
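For instance, the SSL assignment would look something like this - the same ASSIGN_ACL call from above, pointed at the HTTPS port:

BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL
  (
    acl         => 'apex-s3.xml',
    host        => '*.amazonaws.com',
    lower_port  => 443,
    upper_port  => 443
  );
  COMMIT;
END;
/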
Integration with APEX
At this point, the integration between Oracle and S3 should be complete.  We can run a simple test to verify this.  
From the SQL Workshop, enter and execute the following SQL, replacing apex-s3-integration with the name of your bucket: 
SELECT * FROM table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))

This query will return all files that are stored in the bucket apex-s3-integration, as shown below:

[screenshot]

If you see the file that you previously uploaded, then everything is working as it should!
 
Now that the configuration is complete, the next step is to build the actual integration into APEX so that we can upload and download BLOBs to/from S3.  In this example, we will build a simple APEX form & report that allows the user to upload, download and delete content from an S3 bucket.  This example uses APEX 4.2.5, but it should work in previous releases, as it uses nothing too version-specific.

To create a simple APEX & S3 Integration application:
 
1) Create a new Database application.  Be sure to set Create Options to Include Application Home Page, and select any theme that you wish.  This example will use Theme 25.
 
2) Edit Page 1 of your application and create a new Interactive Report called “S3 Files”.  Use the following SQL as the source of the report, replacing apex-s3-integration with your bucket name:
SELECT
  key,
  size_bytes,
  last_modified,
  amazon_aws_s3_pkg.get_download_url
  (
    p_bucket_name => 'apex-s3-integration',
    p_key         => key,
    p_expiry_date => SYSDATE + 1
  ) download,
  key delete_doc
FROM
  table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))

The SQL calls the pipelined function get_object_tab from the S3 package and returns all files in the corresponding bucket.  Since the AWS credentials are embedded in the S3 package, they do not need to be entered here.
 
Since the bucket that we created is not open to the public, all access to it is restricted.  Only when a specific Access Key ID, Signature and Expiration Date are passed in will the file be accessible.  To do this, we need to call another API to generate a valid link with that information included.  The get_download_url API takes three parameters - bucket name, key and expiry date.  The first two are self-explanatory, whereas the third determines how long the specific download link will be good for.  In our example, it is set to 1 day, but it can be set to any duration of time.
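The same API can also be called from PL/SQL if you ever need a link outside of a report.  A quick sketch - the file name is hypothetical, and the expiry is shortened to one hour:

DECLARE
  l_url VARCHAR2(4000);
BEGIN
  -- Generate a pre-signed link that expires in one hour
  l_url := amazon_aws_s3_pkg.get_download_url
  (
    p_bucket_name => 'apex-s3-integration',
    p_key         => 'invoice.pdf',
    p_expiry_date => SYSDATE + 1/24
  );
  dbms_output.put_line(l_url);
END;
/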
 
Once the report is created, run and log in to your application.  The first page will display something similar to this:
 
[screenshot]
Upon closer inspection of the Download link, the AWSAccessKeyId, Expires and Signature parameters can be seen.  These were automatically generated by the S3 API, and this link can be used to download the corresponding file.
 
3) Edit the KEY column of the Interactive Report.  Enter #KEY# for the Link Text, set the Target to URL and enter #DOWNLOAD# for the URL and click Apply Changes.
 
Create a Download Link on the KEY Column
 
Next, since we no longer need to display the Download column in the report, it should be set to hidden.
 
4) Edit the Report Attributes of the Interactive Report.
 
5) Set Display Text As for the Download column to Hidden and click Apply Changes.
 
Set the Download column to Hidden
 
When running Page 1 now, the name of the file should be a link.  Clicking on the link should either display or download the file from S3, depending on what type of file was uploaded.
 
To delete an item, we will have to call the S3 API delete_object and pass in the corresponding bucket name and key of the file to be deleted.  We can handle this easily via a column link that calls a Dynamic Action.
 
Before we get started, we’ll need to create a hidden item on Page 1 that will store the file key.
 
6) Create a new Hidden Item on Page 1 called P1_KEY.
 
7) Make sure that the Value Protected attribute is set to Yes, take all the defaults on the next page, and create the item.
 
Next, let’s edit the DELETE_DOC column and set the link that will trigger a Dynamic Action which will in turn, delete the document from S3.
 
8) Edit the DELETE_DOC column.
 
9) Enter Delete for the Link Text and enter the following for the Link Attributes: id="#KEY#" class="deleteDoc"  Next, set the Target to URL, enter # in the URL field and click Apply Changes.
 
Set the Link Attributes of the DELETE_DOC Column
 
Next, let’s create the Dynamic Action.
 
10) Create a new Dynamic Action called Delete S3 Doc.  It should be set to fire when the Event is a Click, and when the jQuery Selector is equal to .deleteDoc
 
Definition of the Delete S3 Doc Dynamic Action
 
11) Next, the first True action should be set to Confirm, and the message “Are you sure that you want to delete this file?” should be entered into the Text region.
 
Add the Confirm True Action
 
12) Click Next and then click Create Dynamic Action.
 
At this point, if you click on the Delete link, a confirmation message should be displayed.  Clicking OK will have no impact, as additional True actions need to be first added to the Dynamic Action.  Let’s create those True actions now.
 
13) Expand the Delete S3 Doc Dynamic Action and right-click on True and select Create.
 
14) Set the Action to Set Value, uncheck Fire on Page Load, and enter the following for JavaScript Expression: this.triggeringElement.id;  Next, set the Selection Type to Item(s), enter P1_KEY for the Item(s) and click Create.
 
Create the Set Value True Action
 
15) Create another True action by clicking on Add True Action.
 
16) Set the Action to Execute PL/SQL Code and enter the following for PL/SQL Code, replacing apex-s3-integration with the name of your bucket:
amazon_aws_s3_pkg.delete_object 
  (
  p_bucket_name => 'apex-s3-integration',
  p_key => :P1_KEY
  );

Enter P1_KEY for Page Items to Submit and click Create.

Create Execute PL/SQL Code True Action

 
17) Create another True action by clicking on Add True Action.
 
18) Set the Action to Refresh, set the Selection Type to Region, and set the Region to S3 Files (20) and click Create.
 
Create a Refresh True Action
 
At this point, if you click on the Delete link for a file in S3, you should be prompted to confirm the deletion, and if you click OK, the file will be removed from S3 and the Interactive Report refreshed.  A quick check of your AWS S3 Dashboard should show that the file is, in fact, deleted.
 
The last step is to create a page that allows the user to upload files to S3.  This can easily be done with a File Upload APEX item and a simple call to the S3 API new_doc.
 
19) Create a new blank page called Upload S3 File and set the Page Number to 2.  Re-use the existing tab set and tab and optionally add a breadcrumb to the page and set the parent to the Home page.
 
20) On Page 2, create a new HTML region called Upload S3 File.
 
Next, we’ll add a File Browse item to the page.  This item will use APEX’s internal table WWV_FLOW_FILES to temporarily store the BLOB file before uploading it to S3.
 
21) In the new region, create a File Browse item called P2_DOC and click Next.
 
22) Set the Label to Select File to Upload and the Template to Required (Horizontal - Right Aligned) and click Next.
 
23) Set Value Required to Yes and Storage Type to Table WWV_FLOW_FILES and click Next.
 
24) Click Create Item.
 
Next, we’ll add a button that will submit the page.
 
25) Create a Region Button in the Upload S3 File region.
 
26) Set the Button Name to UPLOAD, set the Label to Upload, set the Button Style to Template Based Button and set the Button Template to Button and click Next.
 
27) Set the Position to Region Template Position #CREATE# and click Create Button.
 
Next, the PL/SQL Process that will call the S3 API needs to be created.
 
28) Create a new Page Process in the Page Processing region.
 
29) Select PL/SQL and click Next.
 
30) Enter Upload File to S3 for the Name and click Next.
 
31) Enter the following for the region Enter PL/SQL Page Process, replacing apex-s3-integration with the name of your bucket and click Next.
FOR x IN (SELECT * FROM wwv_flow_files WHERE name = :P2_DOC)
LOOP
  -- Create the file in S3
  amazon_aws_s3_pkg.new_object
  (
    p_bucket_name  => 'apex-s3-integration',
    p_key          => x.filename,
    p_object       => x.blob_content,
    p_content_type => x.mime_type
  );
END LOOP;

-- Remove the doc from WWV_FLOW_FILES
DELETE FROM wwv_flow_files WHERE name = :P2_DOC;
The PL/SQL above will loop through the table WWV_FLOW_FILES for the document just uploaded and pass the filename, MIME type and the file itself to the new_object S3 API, which in turn will upload the file to S3.  The last line ensures that the file is immediately removed from the WWV_FLOW_FILES table.
 
32) Enter File Uploaded to S3 for the Success Message and click Next.
 
33) Set When Button Pressed to UPLOAD (Upload) and click Create Process.
 
One more thing needs to be added - a branch that returns to Page 1 after the file is uploaded.
 
34) In the Page Processing region, expand the After Processing node and right-click on Branches and select Create.
 
35) Enter Branch to Page 1 for the Name and click Next.
 
36) Enter 1 for Page, check Include Process Success Message and click Next.
 
37) Set When Button Pressed to UPLOAD (Upload) and click Create Branch.
 
Now, you should be able to upload any file to your S3 bucket from APEX!
 
One more small addition needs to be made to our example application.  We need a way to get to Page 2 from Page 1.  A simple region button should do the trick.
 
38) Edit Page 1 of your application.
 
39) Create a Region Button in the S3 Files region.
 
40) Set the Button Name to UPLOAD, set the Label to Upload, set the Button Style to Template Based Button and set the Button Template to Button and click Next.
 
41) Set the Position to Right of the Interactive Report Search Bar and click Next.
 
42) Set the Action to Redirect to Page in this Application, enter 2 for Page and click Create Button.
 
That’s it!  You should now have a working APEX application that has the ability to upload, download and delete files from Amazon’s S3 service.
 
Finished Product
 
Please leave any questions or report any typos in the comments, and I’ll get to them as soon as time permits.

Announcing the ORCLAPEX NOVA Meetup Group

Thu, 2014-05-08 12:41

Following in the footsteps of a few others, I’m happy to announce the formation and initial meeting of the ORCLAPEX NOVA (Northern Virginia) group!  

As Dan McGhan and Doug Gault have mentioned in their blogs, a bunch of us who are regular APEX users are trying to continue to grow the community by providing in-person meetings where we can meet other APEX developers and trade stories, tips and anything else.  Each of the groups is independently run by the local organizers, so the formats and topics will vary from group to group, but the core content will always be focused around Oracle APEX.  Groups will also be vendor-neutral, meaning that the core purpose of the group is to provide education and facilitate the sharing of APEX-related ideas, not to market services or products.

Right now, there are a number of groups already formed across the world.

I’m happy to announce that the first meeting of the ORCLAPEX NOVA group will be Thursday, May 29th, 2014 at Oracle’s Reston office in Reston, VA at 7:00 PM.  Details about the event can be found here.  We will start the group with a bang, as Mike Hichwa, VP of Database Development at Oracle, will be presenting APEX 5.0 New Features for the bulk of the meeting.  You can guarantee that we’ll get to see the latest and greatest features being prepared for the upcoming APEX 5.0 release.  Here’s the rest of the agenda:

 7:00 PM Pizza & Sodas; informal chats 

 7:15 PM Welcome - Scott Spendolini, Enkitec 

 7:30 PM APEX 5.0 - Mike Hichwa, Oracle Corporation 

 9:00 PM Wrap Up & Poll for Next MeetUp

IMPORTANT: In order to attend, you must create a MeetUp.com account, join the group and RSVP.  You will also have to use your real name, as it will be provided to Oracle Security prior to the event, and if you’re not listed, you may not be able to attend.  All communications and announcements will be facilitated via the MeetUp.com site as well.

Also, not all meetings need to be at the Oracle Reston facility; we’re using it because Mike & Shakeeb were able to secure the room for free, and it’s relatively central to Arlington, Fairfax and Loudoun Counties.  Part of what we’ll have to figure out is how many smaller, more local groups we may want to form (i.e. PW County, DC, MD, etc.) and whether or not we should try to keep them loosely associated.  One thought that I had would be for the smaller groups to meet more locally and frequently, and for all of the groups to seek out presenters for an “all hands” type meeting that we can move around the region.  All options are on the table at this point.

I look forward to meeting many of you in person on the 29th!
