Scott Spendolini

Sumner Technologies: Take Two
About a month ago, I left my position at Accenture Enkitec Group. I had a couple of ideas as to what I wanted to do next, but nothing was 100% solid. After considering a couple of different options, I'm happy to announce that together with Doug Gault & Tim St. Hilaire, we're re-launching Sumner Technologies.
Much like last time, the focus will be on Oracle APEX, but we're going to refine that focus a little bit. In addition to traditional consulting, we're going to focus more on higher-level services, such as security reviews and APEX health checks, as well as produce a library of on-demand training content. APEX has matured tremendously over the past few years, and we feel that these services will meet the needs of the marketplace.
It's exciting to be starting things over, so to speak. Lots will be the same, but even more will be different. There's a lot of work to be done (yes, I know the site is not in APEX - yet), but we're excited about the potential of what we're going to offer APEX customers, as the APEX marketplace is not only more mature, but has also grown and will continue to do so.
Feel free to check out what we’re up to on Facebook, Twitter, LinkedIn and our website. Or find us at KScope in a couple of weeks!
Destroying The Moon
Like all good things, that run has come to an end. Last Friday was my final day at Accenture, and I am once again back in the arena of being self-employed. Without any doubt, I am leaving behind some of the best minds in the Oracle community. However, I am not leaving behind the new friendships that I have forged over the past three years. Those will come with me and hopefully remain with me for many, many years to come.
Making the jump for the second time is not nearly as scary as it was the first time, but it's still an emotional move. Specifically, what's next for me? That's a good question, as the answer is not 100% clear yet. There are a lot of possibilities, and hopefully things will be a lot more defined by the end of the week.
#letswreckthistogether
Little League, Big Data
In preparation for the draft, we were sent a couple of key spreadsheets. The first one had an average rating for each kid's tryout assessment, as scored by the board members. The second one contained coaches' evaluations for some of the players from past seasons. Lots and lots of raw data, and nothing more.
Time to fire up APEX. I created a workspace on my laptop, as I was not sure if we would have WiFi at the draft. From there, I imported both spreadsheets into tables, and got to work on creating a common key. Luckily, the combination of first and last name produced no duplicates, so it was pretty easy to link the two tables. Next, I created a simple IR based on the EVALS table - which was the master. This report showed all of the tryout scores, and also ranked each player based on the total score.
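Linking the two tables was nothing more than a join on the name columns - roughly the sketch below, where COACH_EVALS and the column names are hypothetical stand-ins (EVALS was the master):
SELECT e.first_name,
       e.last_name,
       e.total_score,
       RANK() OVER (ORDER BY e.total_score DESC) AS tryout_rank,
       c.overall_rating AS coach_rating
FROM evals e
LEFT JOIN coach_evals c
  ON c.first_name = e.first_name
 AND c.last_name = e.last_name
ORDER BY tryout_rank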
Upon editing a row in EVALS, I had a second report that showed a summary of the coach's evaluation from prior seasons. I could also make edits to the EVALS table, such as identifying players that I was interested in, marking players that were already drafted, and recording any other comments that I wanted to track.
After about 20 minutes of reviewing the data, I noticed something. I was using data collected while the player was under a lot of stress. The data set was also small, as each player only got 5 pitches, 5 catches, 5 throws, etc. The better indicator of a player's talent was in the coach's evaluations, as those represent an entire season of interaction with the player, not just a 3-4 minute period.
Based on this, I was quickly able to change my IR on the first page to also include a summary of the coach's evaluations alongside the tryout evaluations. I sorted my report based on that, and got a very different order. This was the order that I was going to go with for my picks.
Once the draft started, it was very easy to mark each player as drafted, so that any drafted player would no longer show up in the report. It was also trivial to toggle the "must draft" column on and off, ensuring that if there were any younger players that I wanted, I could get them in the early rounds before we had to only draft older players.
Each time it was my pick, I already knew which player I was going to draft. Meanwhile, when their picks came around, the other coaches shuffled stacks of marked-up papers and attempted to navigate multiple spreadsheets. Even the coordinator commented on how I was always ready and kept things moving along.
Unless you're some sort of youth athletics coach that does a draft, this application will likely do you little good. But the concept can go a long way. In almost any role in any organization, you likely have data for something scattered across a few different sources or spreadsheets. This data, when isolated, only paints a blurry part of the whole picture. But when combined and analyzed, the data can start to tell a better story, as was the case in my draft.
The technical skills required to build this application were also quite minimal. The bulk of what I used was built-in functionality of the Interactive Report in APEX. Merging the data and linking the two tables was really the only true technical portion of this, and that's even something that can be done by a novice.
So the next time you have a stack of data that may be somehow related, resist the temptation to use old methods when trying to analyze it. Get it into the database, merge it as best you can, and let APEX do the rest.
Screaming at Each Other
Oracle APEX 5 Update from OOW
Despite this bit of bad news, there were a number of bits of good news as well. First of all, there will be an EA3. This is good because it demonstrates that the team has been hard at work fixing bugs and adding features. Based on the live demonstrations that were presented, there are some subtle and some not-so-subtle things to look forward to. The subtle ones include an even more refined UI, complete with smooth fade-through transitions. I tweeted about the not-so-subtle ones the other day, but to recap here: pivot functionality in IRs, and column toggle and reflow in jQuery Mobile.
After (or right before - it wasn't 100% clear) EA3 is released, the Oracle APEX team will host their first public beta program. This will enable select customers to download and install APEX 5.0 on their own hardware. This is an extraordinary and much-needed positive change in their release cycle, as for the first time, customers can upgrade their actual applications in their own environments and see what implications APEX 5.0 will bring. Doing a real-world upgrade on actual APEX applications is something that the EA instances could never even come close to pulling off.
After the public beta, Oracle will upgrade their internal systems to APEX 5.0 - and there are a lot of those. At last count, I think the number of workspaces was just north of 3,000. After the internal upgrade, apex.oracle.com will have its turn. And once that is complete, we can expect APEX 5.0 to be released.
No one likes delays. But in this case, it seems that the extra time required is quite justified, as APEX 5.0 still needs some work, and the upgrade path from 4.x needs to be nothing short of rock-solid. Keep in mind that with each release, there is a larger number of customers using a larger number of applications, so ensuring that their upgrade experience is as smooth as possible is just as important as, if not more important than, any new functionality.
In the meantime, keep kicking the tires on the EA instance and provide any feedback or bug reports!
Take a Walk
Improve your programming with a daily regimen of situps (or anything you can do to strengthen abs), walks in the woods, and lots of water.
— Steven Feuerstein (@stevefeuerstein) July 14, 2014
Which, in turn, inspired me to quickly write this post.
The combination of being in IT and working from home leads to lots of hours logged in some sort of chair, whether it's in my home office, at a customer site or a coffee shop. You don't need to be a doctor to realize that this is not particularly healthy behavior.
So for the past few months, I've incorporated something new into my daily routine: taking a walk. It doesn't sound like much, and quite honestly, it really isn't. But, I wish that I had started this years ago, because the benefits of it are huge.
First of all, it's nice to get outside during the day, especially when it's actually nice out. Nothing can quite compare to it, no matter how many pixels they squeeze into a tablet. Sometimes I just walk at a leisurely pace; other times I run. I'm not training for any specific race, nor do I feel compelled to share my statistics over social media. I just do what I want when I can.
Second of all, it gives me some time to listen to a podcast or music, or to just think. I've really grown to like the podcasts that the folks at TWiT (http://www.twit.tv) produce, with This Week in Tech being one of my favorites. Listening to something that interests you makes the time go by so much more quickly that you may even be tempted to extend your distance to accommodate the extra content.
In fact, listening to them really puts me in a creative and inspired mood, which helps explain the third benefit: background processing. I don't know much about neuroscience, but I do know a little bit about how my brain works. If I'm struggling with a difficult problem, I've learned over time that the best thing that I can do is to literally walk away from it. Going on a walk or run or even a drive allows my brain to "background process" that problem while I focus on other things. The "A-Ha!" moment that I have is my brain's way of alerting me once the problem has been solved. Corny, I know, but that's how it works for me.
And lastly - and probably most importantly - I've been able to drop a few pounds because of my walks (combined with better eating habits). I do use RunKeeper to log my walks and track my weight, because numbers simply don't lie. It also serves as a source of inspiration if I can beat a personal record or cross a weight milestone.
Next ORCLAPEX NOVA Meetup: July 17th
We're going to try the "Open Mic" format that has been wildly successful at KScope for the past few years. The rules are quite simple: anyone can demonstrate their APEX-based solution for up to 10 minutes. No need to reserve a spot or spend too much time planning. And as always, no slides will be permitted - strictly demo.
Anyone and everyone is welcome to present - even if you have never presented before. We're a welcoming group, so please don't be shy or feel intimidated! I've actually seen quite an amazing selection of APEX-based solutions at prior open mic nights from people who have never presented before, so I encourage everyone to give it a try.
While there is WiFi available at Oracle, it's always best to have a local copy of your demonstration, just in case you can't connect or the network is having issues.
See you Thursday night!
ORCLAPEX NOVA Update - Columbus Brings It
For the upcoming inaugural ORCLAPEX NOVA MeetUp on May 29th, not only will we have Mike Hichwa, Shakeeb Rahman and David Gale from the Reston-based Oracle APEX development team present, but the entire Columbus, OH-based APEX team will be in attendance as well: both Joel Kallman and Jason Straub will be in town and have RSVP'ed to the MeetUp!
Outside of major conferences such as KScope or OpenWorld, there is no other public forum that will have the same level of APEX expertise from the team that develops the product present! So what are you waiting for? Join the rest of us who have already RSVP’ed to this event, as it’s 100% free, and you’re sure to learn a bunch about APEX 5.0 and other exciting happenings in the Database Development world at Oracle.
Note: you have to be a member of MeetUp (which is free to join) and RSVP to the event to attend (which is also free), as a list of people needs to be provided to Oracle the day before the event occurs.
BLOBs in the Cloud with APEX and AWS S3
Recently, I was working with one of our customers and ran into a rather unique requirement and an uncommon constraint. The customer - Storm Petrel - has designed a grant management system called Tempest. This system is designed to aid local municipalities when applying for FEMA grants after a natural disaster occurs. As one can imagine, there is a lot of old fashioned paperwork when it comes to managing such a thing.
Thus, the requirement called for the ability to upload and store scanned documents. No OCR or anything like that, but rather invoices and receipts so that a paper trail of the work done and associated billing activity can be preserved. For APEX, this can be achieved without breaking a sweat, as the declarative BLOB feature can easily upload a file and store it in a BLOB column of a table, complete with filename and MIME type.
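For reference, the table behind such a page needs little more than a BLOB column plus columns for the metadata that APEX captures automatically. A minimal sketch - the table and column names here are hypothetical, as they simply get mapped in the file browse item's settings:
CREATE TABLE documents
(
doc_id NUMBER PRIMARY KEY,
filename VARCHAR2(400),
mime_type VARCHAR2(255),
doc_content BLOB
);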
However, the tablespace storage costs from the hosting company for the anticipated volume of documents was considerable. So much so that the cost would have to be factored into the price of the solution for each customer, making it more expensive and obviously less attractive.
My initial thought was to use Amazon's S3 storage solution, since the cost of storing 1GB of data for a month is literally 3 cents. Data transfer prices are also ridiculously inexpensive, and from what I have seen via marketing e-mails, the prices of this and many of Amazon's other AWS services have been on a downward trend for some time.
The next challenge was to figure out how to get APEX integrated with S3. I have seen some of the AWS API documentation, and while there are ample examples for Java, .NET and PHP, there is nothing at all for PL/SQL. Fortunately, someone else has already done the heavy lifting here: Morten Braten & Jeffrey Kemp.
Morten's Alexandria PL/SQL Library is an amazing open-source suite of PL/SQL utilities which provides a number of different services, such as document generation, data integration and security. Jeff Kemp has a presentation on SlideShare that best covers the breadth of this utility. You can also read about the latest release - 1.7 - on Morten's blog here. You owe it to yourself to check out this library whether or not you have any interest in AWS S3!
In this latest release of the library, Jeff Kemp has added a number of enhancements to the S3 integration piece of the framework, making it quite capable of managing files on S3 via a set of easy to use PL/SQL APIs. And these APIs can be easily & securely integrated into APEX and called from there. He even created a brief presentation that describes the S3 APIs.
So let's get down to it. How does all of this work with APEX? First of all, you will need to create an AWS account. You can do this by navigating to http://aws.amazon.com/ and clicking on Sign Up. The wizard will guide you through the account creation process and collect any relevant information that it needs. Please note that you will need to provide a valid credit card in order to create an AWS account, as AWS is not free, depending on which services you choose to use.
Once the AWS account is created, the first thing that you should consider doing is creating a new user that will be used to manage the S3 service. The credentials that you use when logging into AWS are similar to root, as you will be able to access and manage any of the many AWS services. When deploying only S3, it's best to create a user that can do only that.
To create a new user:
1) Click on the Users tab
2) Click Create New User
3) Enter the User Name(s) and click Create. Be sure that Generate an access key for each User is checked.
Once you click Create, another popup region will be displayed. Do not close this window! Rather, click on Show User Security Credentials to display the Access Key ID and Secret Access Key ID. Think of the Access Key ID as a username and the Secret Access Key ID as a password, and then treat them as such.
For ease of use, you may want to click Download Credentials and save your keys to your PC.
The next step is to create a Group that your new user will be associated with. The Group in AWS is used to map a user or users to a set of permissions. In this case, we will need to allow our user to have full access to S3, so we will have to ensure that the permissions allow for this. In your environment, you may not want to grant as many privileges to a single user.
To create a new group:
1) Click on the Groups tab
2) Click on Create New Group
3) Enter the Group Name, such as S3-Admin, and click Continue
The next few steps may vary depending on which privileges you want to assign to this group. The example will assume that all S3 privileges are to be assigned.
4) Select Policy Generator, and then click on the Select button.
5) Set the AWS Service drop down to Amazon S3.
6) Select All Actions (*) for the Actions drop down.
7) Enter arn:aws:s3:::* for the Amazon Resource Name (ARN) and click Add Statement. This will allow access to any S3 resource. Alternatively, to create a more restricted group, a bucket name could have been specified here, limiting the users in this group to only be able to manage that specific bucket.
8) Click Continue.
9) Optionally rename the Policy Name to something a little less cryptic and click Continue.
10) Click Create group to create the group.
The animation below illustrates the previous steps:
Next, we’ll add our user to the newly created group.
1) Select the group that was just created by checking the associated checkbox.
2) Under the Users tab, click Add Users to Group.
3) Select the user that you want to add and then click Add Users.
The user should now be associated with the group.
Select the Permissions tab to verify that the appropriate policy is associated with the user.
At this point, the user management portion of AWS is complete.
The next step is to configure the S3 portion. To do this, navigate to the S3 Dashboard:
1) Click on the Services tab at the top of the page.
2) Select S3.
You should see the S3 dashboard now:
S3 uses "buckets" to organize files. A bucket is just another word for a folder. Each of these buckets has a number of different properties that can be configured, making the storage and security options quite extensible. While there is a limit of 100 buckets per AWS account, buckets can contain folders, and when using the AWS APIs, it's fairly easy to provide a layer of security based on a file's location within a bucket.
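For example, a "folder" is really nothing more than a key prefix. The hypothetical keys below would display in the dashboard as two folders within a single bucket, and a prefix like this is also a convenient handle for restricting access in your own PL/SQL layer:
invoices/2014/inv-1001.pdf
invoices/2014/inv-1002.pdf
receipts/2014/rcpt-2001.pdf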
Let’s start out by creating a bucket and setting up some of the options.
1) Click on Create Bucket.
2) Enter a Bucket Name and select the Region closest to your location and click Create. One thing to note - the Bucket Name must be unique across ALL of AWS. So don’t even try demo, test or anything like that.
3) Once your bucket is created, click on the Properties button.
I’m not going to go through all of the properties of a bucket in detail, as there are plenty of other places that already have that covered. Fortunately, for our purposes, the default settings on the bucket should suffice. It is worth taking a look at these settings, as many of them - such as Lifecycle and Versioning - can definitely come in handy and reduce your development and storage costs.
Next, let’s add our first file to the bucket. To do this:
1) Click on the Bucket Name.
2) Click on the Upload button.
3) A dialog box will appear. To add a file or files, click Add Files.
4) Using the File Upload window, locate the file that you wish to upload. Select it and click Open.
5) Click Start Upload to initiate the upload process.
Depending on your file size, the transfer will take anywhere from a second to several minutes. Once it’s complete, your file should be visible in the left side of the dashboard.
6) Click on the recently uploaded file.
7) Click on the Properties button.
Notice that there is a link to the file displayed in the Properties window. Click on that link. You’re probably looking at something like this now:
That is because by default, all files uploaded to S3 will be secured. You will need to call an AWS API to generate a special link in order to access them. This is important for a couple of reasons. First off, you clearly don’t want just anyone accessing your files on S3. Second, even if securing files is not a major concern, keep in mind that S3 also charges for data transfer. Thus, if you put a large public file on S3, and word gets out as to its location, charges can quickly add up as many people access that file. Fortunately, securely accessing files on S3 from APEX is a breeze with the Alexandria PL/SQL libraries. More on that shortly.
If you want to preview any file in S3, simply right-click on it and select Open or Download. This is also how you rename and delete files in S3. And only authorized AWS S3 users will be able to perform these tasks, as the S3 Dashboard requires a valid AWS account.
The Alexandria packages are typically installed directly into your APEX parse-as schema. However, they can also be installed into a centralized schema and then made available to other schemas that need to use them.
There are eight files that need to be installed, as well as a one-off command.
1) First, connect to your APEX parse-as schema and run the following script:
create type t_str_array as table of varchar2(4000)
/
2) Next, install the utility packages by running the following four scripts as your APEX parse-as schema:
/plsql-utils-v170/ora/http_util_pkg.pks
/plsql-utils-v170/ora/http_util_pkg.pkb
/plsql-utils-v170/ora/debug_pkg.pks
/plsql-utils-v170/ora/debug_pkg.pkb
3) Edit the file amazon_aws_auth_pkg.pkb in a text editor.
4) Near the top of the file are three global variable declarations: g_aws_id, g_aws_key and g_gmt_offset. Set the values of these three variables to the Access Key ID, Secret Key ID and GMT offset. These values were displayed and/or downloaded when you created your AWS user. If you did not record these, you will have to create a new pair back in the User Management dashboard.
g_aws_id varchar2(20) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Access Key ID
g_aws_key varchar2(40) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Secret Key
g_gmt_offset number := 4; -- your timezone GMT adjustment (EST = 4, CST = 5, MST = 6, PST = 7)
5) Once the changes to amazon_aws_auth_pkg.pkb are made, save the file.
6) Next, run the following four SQL scripts in the order below as your APEX parse-as schema:
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pks
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pkb
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pks
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pkb
IMPORTANT NOTE: The S3 packages in their current form do not offer support for SSL. This is a big deal, since any request that is made to S3 will be done in the clear, putting the contents of your files at risk as they are transferred to and from S3. There is a proposal on the Alexandria Issues Page that details this deficiency.
I have made some minor alterations to the AMAZON_AWS_S3_PKG package which accommodate using SSL and Oracle Wallet when calling S3. You can download it from here. When using this version, there are three additional package variables that need to be altered:
g_orcl_wallet_path constant varchar2(255) := 'file:/path_to_dir_with_oracle_wallet';
g_orcl_wallet_pw constant varchar2(255) := 'Oracle Wallet Password';
g_aws_url_http constant varchar2(255) := 'https://'; -- Set to either http:// or https://
Finally, if your database enforces network ACLs (11g and above), the parse-as schema will need permission to reach Amazon's servers. Run something like the following as a privileged user, substituting your own parse-as schema for APEX_S3:
BEGIN
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL
(
acl => 'apex-s3.xml',
description => 'ACL for APEX-S3 to access Amazon S3',
principal => 'APEX_S3',
is_grant => TRUE,
privilege => 'connect'
);
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL
(
acl => 'apex-s3.xml',
host => '*.amazonaws.com',
lower_port => 80,
upper_port => 80
);
COMMIT;
END;
/
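One caveat, assuming you use the SSL-enabled version of the package: the ACL above only opens port 80, so HTTPS calls on port 443 would still be blocked. An additional ASSIGN_ACL call along these lines should take care of that:
BEGIN
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL
(
acl => 'apex-s3.xml',
host => '*.amazonaws.com',
lower_port => 443,
upper_port => 443
);
COMMIT;
END;
/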
To verify that everything is installed and configured correctly, run a quick query that uses the get_object_tab function:
SELECT * FROM table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))
This query will return all files that are stored in the bucket apex-s3-integration, as shown below:

If you see the file that you previously uploaded, then everything is working as it should!
From here, integrating S3 into an APEX application is straightforward. For example, a report that lists the contents of the bucket - complete with a secure, time-limited download link and a delete column for each file - can be based on a query like this:
SELECT
key,
size_bytes,
last_modified,
amazon_aws_s3_pkg.get_download_url
(
p_bucket_name => 'apex-s3-integration',
p_key => key,
p_expiry_date => SYSDATE + 1
) download,
key delete_doc
FROM
table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))
Deleting a file is just as easy. A page process can simply call the delete_object API, passing in the key of the document to remove:
amazon_aws_s3_pkg.delete_object
(
p_bucket_name => 'apex-s3-integration',
p_key => :P1_KEY
);
Enter P1_KEY for Page Items to Submit and click Create.

Finally, to upload a document from APEX to S3, create a page process that reads the uploaded file from WWV_FLOW_FILES, creates it on S3, and then deletes the local copy:
FOR x IN (SELECT * FROM wwv_flow_files WHERE name = :P2_DOC)
LOOP
-- Create the file in S3
amazon_aws_s3_pkg.new_object
(
p_bucket_name => 'apex-s3-integration',
p_key => x.filename,
p_object => x.blob_content,
p_content_type => x.mime_type
);
END LOOP;
-- Remove the doc from WWV_FLOW_FILES
DELETE FROM wwv_flow_files WHERE name = :P2_DOC;

Announcing the ORCLAPEX NOVA Meetup Group
Following in the footsteps of a few others, I’m happy to announce the formation and initial meeting of the ORCLAPEX NOVA (Northern Virginia) group!
As Dan McGhan and Doug Gault have mentioned in their blogs, a bunch of us who are regular APEX users are trying to continue to grow the community by providing in-person meetings where we can meet other APEX developers and trade stories, tips and anything else. Each of the groups is independently run by the local organizers, so the formats and topics will vary from group to group, but the core content will always be focused around Oracle APEX. Groups will also be vendor-neutral, meaning that the core purpose of the group is to provide education and facilitate the sharing of APEX-related ideas, not to market services or products.
Right now, there are a number of groups already formed across the world:
- ORCLAPEX-MSP for the Minneapolis, St. Paul area led by Jorge Rimblas
- ORCLAPEX-DFW for the Dallas/Fort Worth area led by Doug Gault
- ORCLAPEX-MTL for the Montreal area of Canada led by Francis Mignault
- ORCLAPEX-NYC for the New York City area led by Dan McGhan
- ORCLAPEX-VIENNA for the Vienna Austria area led by Peter Raganitsch
I’m happy to announce that the first meeting of the ORCLAPEX NOVA group will be Thursday, May 29th, 2014 at Oracle’s Reston office in Reston, VA at 7:00 PM. Details about the event can be found here. We will start the group with a bang, as Mike Hichwa, VP of Database Development at Oracle, will be presenting APEX 5.0 New Features for the bulk of the meeting. You can guarantee that we’ll get to see the latest and greatest features being prepared for the upcoming APEX 5.0 release. Here’s the rest of the agenda:
7:00 PM Pizza & Sodas; informal chats
7:15 PM Welcome - Scott Spendolini, Enkitec
7:30 PM APEX 5.0 - Mike Hichwa, Oracle Corporation
9:00 PM Wrap Up & Poll for Next MeetUp
IMPORTANT: In order to attend, you must create a MeetUp.com account, join the group and RSVP. You will also have to use your real name, as it will be provided to Oracle Security prior to the event, and if you’re not listed, you may not be able to attend. All communications and announcements will be facilitated via the MeetUp.com site as well.
Also, not all meetings need to be at the Oracle Reston facility; we’re using it because Mike & Shakeeb were able to secure the room for free, and it’s relatively central to Arlington, Fairfax and Loudoun Counties. Part of what we’ll have to figure out is how many smaller, more local groups we may want to form (i.e. PW County, DC, MD, etc.) and whether or not we should try to keep them loosely associated. One thought that I had would be for the smaller groups to meet more locally and frequently, and for all of the groups to seek out presenters for an “all hands” type meeting that we can move around the region. All options are on the table at this point.
I look forward to meeting many of you in person on the 29th!
Working Orange (Updated)
WorkingOrange was created to allow Syracuse alumni from all fields to share what it is they do on a typical day, as well as interact with students, other alumni and anyone else. It is typically active each Tuesday and Thursday, and has hosted alumni from all kinds of industries. For example, today's host is an elementary school teacher, and this past Tuesday's was a producer for a TV station in Philadelphia.
I encourage everyone to follow along here: http://www.twitter.com/workingorange or just follow @WorkingOrange. More information on SU's Career Services can be found here: http://careerservices.syr.edu
UPDATE: due to a scheduling issue, I've been reassigned to Thursday, March 6th.
Upcoming Conferences for 2014
It's that time of year again: Conference Season! There are a few conferences that fall in the first few months of the year that I try to present at each year, and this year is no different. Here's where I'll be presenting over the next few months:
- RMOUG - Denver, CO - February 5th - 7th
At RMOUG this year, I’ll be co-presenting a new session called “Creating a Business UI in APEX” with Jorge Rimblas. I’m very excited about this session, as there is a lot of practical and easy to use information packed into it about user interface design - something most APEX developers have little experience in. I’ll also be a part of the Oracle ACE Lunch & Learn on Friday, so if you want to talk APEX, come and find my table.
- UTOUG - Sandy, UT - March 12th & 13th
This year at UTOUG, I will be presenting "Intro to APEX Security". Given that APEX 5.0 is out in at least an EA release, I hope to incorporate what's new in addition to what APEX 4.2 and prior have to offer.
- GLOC - Cleveland, OH - May 12th & 13th
I’ll be quite busy at GLOC this year, with at least two sessions: the aforementioned “Intro to APEX Security”, as well as a 3-hour hands-on session entitled “APEX Crash Course”. This session will be aimed at those new or relatively new to APEX, and walk the participants through building a few working applications - both desktop and mobile.
Note: GLOC abstract submission closes this Friday, so there's still time to submit if you're interested!
- KScope - Seattle, WA - June 22nd - 26th
As usual, KScope will be the busiest conference of the year for me, with 3 sessions, Open Mic Night, Lunch & Learns, booth duty and who knows what else. In addition to “Creating a Business UI in APEX” and “Intro to APEX Security”, I’ll be holding a Deep Dive session on Thursday entitled “APEX Security Deep Dive”. This session will take a more thorough look at the inner workings of APEX’s security, and is meant for those who are comfortable with APEX.
I'm sure as the year goes by, there will be additional conferences: MAOP, VOUG, ECOUG and OOW are all events that I typically attend. Hope to see some of you at one of them!
Get on Board the ARC
Yesterday, we launched the new APEX Resource Center - or ARC - on enkitec.com. The ARC was designed to provide the APEX experts at Enkitec with an easy way to share all things APEX with the community. It’s split up into a number of different sections, each of which I’ll describe here:
- What's New
The first page of the ARC will display content from all other sections, sorted by date from newest to oldest. Thus, if you want to see what’s new, simply visit this page and have a look. In the future, we’ll provide a way to be notified anytime anything new is added to any section.
- Demonstrations
The Demonstrations section is perhaps the most interesting. Here, our consultants have put together a number of mini-demonstrations using APEX and a number of other associated technologies. Each demonstration has a working demo, as well as the steps used to create it. Our plan is to keep adding new demonstrations on a weekly basis.
- Events
The Events section is a copy of the Events calendar, but with a focus on only APEX-related events.
- Presentations
Like Events, the Presentations section is a copy of the main Presentations section filtered on only APEX-related presentations.
- Technical Articles
Technical Articles will contain a number of different types of articles. These will usually be a bit longer than what’s in the Demonstrations section, and may from time to time contain an opinion or editorial piece. If you have an idea for a Technical Article, then use the Suggest a Tech Article link to send it our way.
- Plug-ins
If you’re not already aware, Enkitec provides a number of completely free APEX Plug-Ins. This section highlights those, with links to download and associated documentation.
- Webinars
Currently, the Webinars section displays any upcoming webinars. In the near future, we’re going to record both webinar content and presentations, and also make those available here.
We're going to work hard to keep adding new content to the ARC at least weekly, so be sure to check back frequently. And as always, feedback and suggestions are welcome - just drop us a line by using the Contact Us form on enkitec.com.
Abstract Submission Advice
Yesterday, I was part of the KScope 14 APEX Abstract Review call. This call is used to discuss the rankings that the Abstract Review Committee has previously given each session. Naturally, we use APEX to help with this process - specifically WebSheets. The call allows us to ensure that the selections are as fair as possible. We make sure that no single presenter has too many slots, ensure that there are enough first-timers vs. veteran presenters and keep the topics of the accepted abstracts balanced. This process has been extremely useful in the past, and really makes for a much better conference.
In reviewing the abstracts, I could not help but mentally compile a list of dos and don'ts for submitting an abstract. While most of them were fairly decent, there were a few that were sub-par in relation to the others, and there were a couple that stood out.
Based on this, I've come up with an ad-hoc list of things to consider when submitting an abstract for any conference. It's in no particular order and by no means complete, but I figured that I'd blog this out while it's still fresh on my mind. Here goes:
Catchy Titles
Catchy titles can definitely draw attention to your session. While the title "Intro to APEX & jQuery" clearly spells out what is covered, it's a bit bland. A more creative version would be something like “jQuery & APEX: 10 Must-Know Commands in an Hour”. Be careful not to use too long of a catchy title, as that's one of the sure-fire ways to not get accepted. If you can't spell it out in just a few words, then perhaps the topic needs to be re-thought.
If you're going to use a catchy title, then keep this in mind: you immediately raise the expectations of the reviewers. Nothing is more disappointing than a catchy title followed by a sub-par summary & abstract. So make sure that you spend at least as much time on the abstract as the title itself!
Less is More
Being succinct is key. We had well over 100 abstracts to review, and if your abstract doesn't stand out in the first sentence or two, then chances are the rest of it may get ignored. You're not writing a book or even a chapter of a book here, so there is no time to build up what you want to say. Just say it!
KScope gives you two places to sell your session: Summary & Abstract. A sure-fire way to sink your session is to copy & paste the same text in both of these. They are different, and if you can’t take the time to fill them out correctly, don’t expect much in return. The Summary should be a paragraph or so that sells the session. This is what most reviewers read first. If it’s good & compelling, we’ll read the abstract. If not, then perhaps not.
Consider this example of a presentation summary:
Starting with APEX 4.0, Oracle began to include jQuery bundled with APEX itself. jQuery is an open-source JavaScript library that makes developing easier and faster. This session will cover the basics of jQuery and how it is integrated with APEX. It will also cover some best practices to use when utilizing jQuery from APEX.
Now, consider this one:
Want to learn how to enhance your application’s visual impact without learning any new commands or languages? Then this session is for you! We’ll show you 10 quick & easy ways to utilize jQuery to add some sizzle to your APEX application - all without more than a line of code each!
Clearly the second one just feels more exciting. It asks a question - which acts as a hook for the reviewer. It then spells out pretty clearly what it will cover, and throws in the added benefit of “one line of code each”. It doesn’t waste any time defining jQuery, but rather almost leaves that to the reviewer. If they know what jQuery is, then there is no issue. If they do not, they can either look it up or come to the session to learn more about it.
One note of caution about being succinct: there is such a thing as too succinct. Have a peer or two read your summary and then ask them to describe what they think will be presented. If they are too far off the mark, you may need to add some more content to it.
The abstract is where you’re going to spell out what you highlighted in the summary. Here’s where you can and should get somewhat technical. In the example above, spell out the 10 things that you’re going to cover. This is the only chance that we’ll get to see the outline of your presentation. If you fail to do this, then you’re less likely to get accepted.
Buzzword Bingo
Google it. Now print it out, and read your abstract. Did you get bingo, or even come close? If yes, then you have too many buzzwords. Nothing aggravates me more than reading a sentence, pausing, and wondering just what the heck the point of that sentence was.
Know Your Audience...
There are fewer than 50 available slots in the APEX track at KScope. While that may sound like a relatively large number, it's actually not, especially given the large number of submissions that we had this year. Throw in things like the Intro track and some of the deep dives, and this number gets even smaller.
Therefore, one of the key criteria that we consider is how wide an audience your session will appeal to. There's probably no such thing as too wide (as long as it has to do with APEX), but there is definitely such a thing as too narrow. A topic that covers IRs or Charts or jQuery will have a wide appeal, because we all use those components. Something like mobile will have a narrower, but still wide enough appeal, because many of us use it. However, once you venture into the more obscure corners of APEX, the audience starts to get dangerously narrow, and the likelihood of acceptance goes down as well.
…And Your Audience Should Know You
Speaking at a large conference such as OpenWorld or KScope is something that is earned. Thus, getting accepted may also take a little bit of work. If you submitted an abstract and did not get accepted, don't give up. Rather, try to start establishing a name for yourself. You can do this a number of different ways: blogging, Twitter, and presenting at smaller, local conferences, just to name a few. Nothing delights the reviewers more than seeing the name of a popular blogger show up in the KScope APEX track.
The best part about blogging & social media is that everyone starts on the same level. If you start a new blog, your content and content alone will determine how others perceive your understanding of the topics that you blog on. If your posts are detailed and contain a lot of good information, it's more likely that people will share them, thus increasing your exposure. If they are poorly written or technically incorrect, people will remember that, too.
Summary
While this post is clearly too late to matter for KScope '14, I hope that it can be helpful for any other conference whose submission deadline has yet to pass. Feel free to add your own advice in the comments.
Multi-Colored SQL
SQL Developer has a subtle feature that makes it much harder to run a statement against the wrong database: connection colors. To use it, simply create or edit a database connection, and set the Connection Color to whichever color you choose:
Once set, any and all windows associated with that connection will be outlined in that color. That's it! I already gleefully went through my connection list and associated different colors with different types of connections. For example, our development schemas got green:
While our production schemas got red: