As the IT landscape evolves, it's becoming easier for decision-makers to implement innovative new solutions. Daily operations, for instance, are being streamlined with cloud services for increased efficiency and greater customization, and organizations handling big data are gaining a broader set of digital strategies.
Cloud solutions for building reliable infrastructures
According to InfoWorld, these changes are widespread throughout enterprise IT. Rather than relying on legacy strategies for basic infrastructures, decision-makers are integrating cloud-based strategies into their core systems. In order to make this transition easier for IT teams, cloud services are being deployed as a Software-as-a-Service (SaaS) strategy, which provides business leaders with scalable storage capabilities immediately. Although not every corporation has transferred information to the public cloud, the source noted that as data categorization becomes more sophisticated with unique customizable applications, tech managers are crafting their own digital architectures.
Database administration services, for instance, provide corporations with tools for managing large stores of data. Not only does this solution assist with information recovery and additional security, but remote DBA experts can also guide new and existing businesses through unfamiliar IT processes.
Uniting legacy strategies with modern services
InformationWeek reported that as cloud services progress, corporations are slowly beginning to replace their old storage platforms with new digital models. Legacy solutions are not, however, vanishing altogether. Hybrid storage strategies are another popular method of enterprise computing that enables IT managers to retain the use of their on-premises solutions alongside the cloud. This strategy is useful for a variety of reasons. Decision-makers who retain sensitive data, such as court documents or medical records, can choose to keep this information in-house and send everything else to a cost-effective public cloud service.
As corporations begin designing new cloud solutions, the available options for unique deployments are increasing. Now that the cloud and SaaS have begun to mature, it's becoming simpler for IT managers to construct easy-to-use digital solutions for outsourcing data.
RDX supports all major UNIX/Linux operating systems, including Red Hat, IBM, HP-UX and Sun. We offer expertise in highly available architectures, database and SQL tuning, security and auditing, advanced database features and more. For more information, please visit our OS Services page or contact us.
As storage technologies advance, the cloud market is continuing to provide enterprises with additional customization options. Furthermore, for IT teams transitioning to a cloud-based infrastructure, providers are making it easier to leverage new strategies to perform information maintenance.
Knowing how to find the right service is key
According to InformationWeek, an emphasis on assistance is becoming increasingly important in cloud deployments. Sending data to the cloud can be a challenging task the first time because of the decisions that IT teams will have to make regarding sensitivity and storage needs. In an effort to simplify the process, most cloud providers offer direct assistance. Additionally, the source noted that business leaders should consider the cloud to be a service, rather than a system.
Adjusting the way decision-makers approach cloud services is the first step toward building an efficient digital infrastructure. The cost-effectiveness of scalable storage is one of the hallmarks of cloud computing. As such, the source noted that IT managers should have a specific set of goals in mind, rather than expecting the cloud to be the solution to a data problem. In other words, cloud strategies provide a vehicle for corporations to manage their storage, but seeing returns will require customizing the infrastructure according to the existing needs of the business.
Narrowing down the unique purpose of the cloud
ZDNet referred to business savvy as the ability to flexibly upgrade office technologies as they become available. In order for the cloud to best serve a corporation, for instance, the IT team should endeavor to stay up to date on the kinds of services that can be deployed within a digital architecture. Additionally, corporations should be willing to do a little research before the transition to find which services are capable of providing the most initial support. Remote DBA experts, for example, offer corporations data categorization assistance and enhanced security.
As businesses make changes to their infrastructures, it's important for IT teams to outline a trajectory for these upgrades. By doing so, the new deployment will be more successful.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our cloud migration and support services, please visit our Cloud DBA Service page or contact us.
Discover this solution by following the December 3 webcast: Database Cloning in Minutes using Enterprise Manager 12c Database as a Service Snap Clone
I encourage people that I work with to put a small number like 8 as the parallel degree when they want to create tables or indexes to use parallel query. For example:
SQL> create table test parallel 8 as select * from dba_tables;

Table created.

SQL> select degree from user_tables where table_name='TEST';

DEGREE
----------
         8
But frequently I find tables that were created with the default degree by leaving out a number on the parallel clause:
SQL> create table test parallel as select * from dba_tables;

Table created.

SQL> select degree from user_tables where table_name='TEST';

DEGREE
----------
DEFAULT
The problem is that on a large RAC system with a lot of CPUs per node, the default degree can be a large number. A table with a large degree can cause a single query to eat up all of the available parallel query processes. That's fine if only one query needs to run at a time, but if you plan to run multiple queries in parallel you need to divide up the parallel query processes among them. I.e., if you have 100 parallel query processes and need to run 10 queries at a time, then you need to be sure each query only gets 10 of them. A degree of 5 works out to roughly 10 processes (a query typically uses two sets of slaves), but the point is that you don't want to start running a bunch of queries with a degree of 50 each when you have 100 parallel processes to divide up.
With the default settings, the default degree is 2 × number of CPUs × number of RAC nodes. I tested this on an Exadata V2 with 2 nodes and 16 CPUs per node. The result was as expected, degree=64:
Final cost for query block SEL$1 (#0) - All Rows Plan:
  Best join order: 1
  Cost: 2.1370  Degree: 64  Card: 4203.0000  Bytes: 54639
  Resc: 123.0910  Resc_io: 123.0000  Resc_cpu: 2311650
  Resp: 2.1370  Resp_io: 2.1354  Resc_cpu: 40133
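As a sanity check on that trace, the arithmetic can be sketched in a few lines (the 2 × CPUs × nodes formula is the one stated above; the slave-count rule of thumb assumes each query uses two slave sets, which is typical but workload-dependent):

```python
# Default parallel degree as described above (assumed formula):
# parallel_threads_per_cpu * cpu_count * number_of_RAC_nodes
def default_degree(cpu_count, rac_nodes, parallel_threads_per_cpu=2):
    """Return the default DOP the optimizer would pick for a full scan."""
    return parallel_threads_per_cpu * cpu_count * rac_nodes

# Exadata V2 test system from the trace: 2 nodes, 16 CPUs per node
print(default_degree(16, 2))   # 64, matching "Degree: 64" in the 10053 trace

# How many concurrent queries fit in a fixed pool of PQ slaves,
# assuming each query of degree N consumes about 2 * N slaves?
def max_concurrent_queries(total_slaves, degree):
    return total_slaves // (2 * degree)

print(max_concurrent_queries(100, 5))  # 10 queries of degree 5 in 100 slaves
```

This is only back-of-the-envelope math to show why an unconstrained default degree crowds out concurrent parallel queries.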
Just to verify that a query with parallel 8 would really use degree 8 I ran the same test with the same table but parallel 8:
Final cost for query block SEL$1 (#0) - All Rows Plan:
  Best join order: 1
  Cost: 6.5419  Degree: 8  Card: 4715.0000  Bytes: 61295
  Resc: 47.1020  Resc_io: 47.0000  Resc_cpu: 2593250
  Resp: 6.5419  Resp_io: 6.5278  Resc_cpu: 360174
Also note the lower cost (2) in the plan with default degree:
---------------------------------------------------------------------------------------------------------------
| Id | Operation                       | Name     | Rows | Bytes | Cost | Time     |  TQ  |IN-OUT|PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                |          |      |       |    2 |          |      |      |           |
|  1 |  SORT AGGREGATE                 |          |    1 |    13 |      |          |      |      |           |
|  2 |   PX COORDINATOR                |          |      |       |      |          |      |      |           |
|  3 |    PX SEND QC (RANDOM)          | :TQ10000 |    1 |    13 |      |          |:Q1000| P->S |QC (RANDOM)|
|  4 |     SORT AGGREGATE              |          |    1 |    13 |      |          |:Q1000| PCWP |           |
|  5 |      PX BLOCK ITERATOR          |          | 4203 |   53K |    2 | 00:00:01 |:Q1000| PCWC |           |
|  6 |       TABLE ACCESS STORAGE FULL | TEST     | 4203 |   53K |    2 | 00:00:01 |:Q1000| PCWP |           |
---------------------------------------------------------------------------------------------------------------
Compared to degree 8 (cost=7):
---------------------------------------------------------------------------------------------------------------
| Id | Operation                       | Name     | Rows | Bytes | Cost | Time     |  TQ  |IN-OUT|PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                |          |      |       |    7 |          |      |      |           |
|  1 |  SORT AGGREGATE                 |          |    1 |    13 |      |          |      |      |           |
|  2 |   PX COORDINATOR                |          |      |       |      |          |      |      |           |
|  3 |    PX SEND QC (RANDOM)          | :TQ10000 |    1 |    13 |      |          |:Q1000| P->S |QC (RANDOM)|
|  4 |     SORT AGGREGATE              |          |    1 |    13 |      |          |:Q1000| PCWP |           |
|  5 |      PX BLOCK ITERATOR          |          | 4715 |   60K |    7 | 00:00:01 |:Q1000| PCWC |           |
|  6 |       TABLE ACCESS STORAGE FULL | TEST     | 4715 |   60K |    7 | 00:00:01 |:Q1000| PCWP |           |
---------------------------------------------------------------------------------------------------------------
So, this shows that in this case, with a 2-node RAC system and 16 CPUs per node, the optimizer uses degree 64 as the default degree and is more likely to choose a full scan over an index scan, because the cost of the degree 64 full scan is less than that of a degree 8 full scan.
The key point is to understand that putting the keyword PARALLEL by itself on a table or index creation statement, instead of PARALLEL 8 (or 4 or 16), can result in an unexpectedly high degree. This high degree can cause performance to degrade by allowing individual queries to eat up the parallel query processes, leaving other queries to run inefficiently without the expected parallelism. The high degree also reduces the cost of a full scan, potentially causing full scans to be favored over index scans where the index scan would be more efficient.
p.s. Here is a zip of the scripts and logs that I used to create the 10053 traces: zip
I got my hands on a copy of the new book "Developing Web Application with Oracle ADF Essentials" by Sten Vesterli published by Packt Publishing, so I wanted to post a quick review.
There are already multiple books about Oracle ADF out there, but the unique angle for this book is that it focuses on the free version of Oracle ADF - Oracle ADF Essentials. To increase the "Free" angle it also uses the free MySQL database throughout the book which is a nice touch that differs from the regular approach of using an Oracle database.
The book is around 250 pages long, and as such it does not aim for a very deep technical dive, but rather gives a very good overview of the various layers of ADF Essentials to someone who is new to the world of ADF. It covers the Fusion stack including ADF BC, ADF Faces, ADF Controller and ADF Binding. Going beyond the wizards, it also covers the basics of adding code in managed beans and the business components layer.
The book starts with setting up your environment including JDev+MySQL and GlassFish, and then uses this environment to build a simple application that you enhance and add features to while going through the various chapters of the book.
Considering the target audience for ADF Essentials (Java purists), it might have been interesting to include a chapter that describes integration with EJBs and POJOs as a source of data for ADF applications. I guess this is something for the next edition...
The book does address establishing a security layer for your ADF Essentials application - something that you get in regular ADF but need to implement manually in the Essentials packaging.
Sten also goes beyond the basic intro level to cover some of the topics he also covered in his other book about enterprise ADF Development. He includes chapters about debugging, deployment and library usage and also addresses team development - which is a nice touch.
Overall the book can provide a great introduction to ADF for a developer who is completely new to the framework. The fact that it uses a completely free stack should make it even more attractive. For developers who are looking to leverage the free version of ADF, this is right now the only book out there to cover that part of the framework (GlassFish configuration etc.).
After you read this book, you'll probably want to dig deeper with some of the more in-depth books about ADF to complete the picture.
What to do
My wallet's gettin thin
And I just lost my watch last night
Well I gotta problem
Just one answer
Gotta throw it all down
And kiss it goodbye
Yeah! That was a crazy game of poker
(That was a crazy game of poker)
I lost it all
(I lost it all)
But someday I'll be back again
And I, never to fold.
(Never to fold) - From O.A.R.'s "That Was A Crazy Game Of Poker"
I try not to be too self-centered here, because I'm pretty convinced you don't want to read that kind of tripe. But it's been a crazy game of poker for the past few days, so I thought I'd share a little bit on here.
First - LinkedIn Went Crazy
I was touching up my LinkedIn profile over the weekend and changed the title of my current job from "Executive Vice President" to "EVP" (more on that in a second). LinkedIn interpreted that as a new job and the congrats have been pouring in since. Digging all the good will, but nothing really happened.
Second - Accolades From Forbes
Earlier this week, Forbes designated my humble Twitter feed as one all Oracle users should read. Wow. I'm honored, especially when I look at the other folks on the list. Grateful to be included as part of that crowd. Pretty doggone cool. So, if Forbes has enticed your interest, it's twitter.com/fteter.
Third - I Resigned From EiS Technologies
Yup, I resigned from my current job at EiS Technologies effective December 31. Now you know why I was tweaking my LinkedIn profile over the weekend ;) And I don't even know where I'm going next. Now why would I do a dumb thing like that?
The cause behind the departure is pretty simple. I had five changes I wanted to mainstream into the business. All successfully done. Made things better. But the owners and I talked recently, and we agreed we see things differently going forward. And while their path may turn out to be great for EiS, it's not really my cup of tea. We're all on good terms and will probably be working together in one way or another for some time to come, but it's time for me to take my bag of skills elsewhere. It's been a great experience all around. In fact, leaving is bittersweet...really enjoyed working with that team. We all learned a bunch from each other, and I've added quite a bit to my own tool bag. Applying those new tools/skills/experiences in a new setting will be on my mind while I help manage a smooth transition at EiS.
Yeah, it’s been a crazy game of poker over the past few days. Can’t wait to see what’s next!
Why should you learn about ZooKeeper? If you are using applications such as HBase, Neo4j, Solr, Accumulo, etc., read on...
You can read much more on the Apache ZooKeeper website. Anyway... if you are looking for a book about Apache ZooKeeper, I'd mention the book titled ZooKeeper: Distributed Process Coordination by Flavio Junqueira and Benjamin Reed.
The book guides readers in using Apache ZooKeeper to manage distributed systems. It has three parts: ZooKeeper Concepts and Basics, Programming with ZooKeeper, and Administering ZooKeeper.
So, this book is good for readers who are interested in ZooKeeper and who use applications related to it. It will help readers understand more about ZooKeeper concepts and develop programs with ZooKeeper. For me, I like it because it gave me ideas for administering ZooKeeper.
The book covers:
- Learn how ZooKeeper solves common coordination tasks
- Explore the ZooKeeper API’s Java and C implementations and how they differ
- Use methods to track and react to ZooKeeper state changes
- Handle failures of the network, application processes, and ZooKeeper itself
- Learn about ZooKeeper’s trickier aspects dealing with concurrency, ordering, and configuration
- Use the Curator high-level interface for connection management
- Become familiar with ZooKeeper internals and administration tools
Written By: Surachart Opun http://surachartopun.com
In the spirit of Thanksgiving, being celebrated this week on Thursday in the USA:
This post is shared from our Oracle Java Community.
Hinkmond Wong's Weblog
First, we need to test the temperature probe before sticking it into unknown places, namely our delicious IoT bird on Thanksgiving. So, take your Go!Temp USB temperature probe and plug it into your Raspberry Pi device, just like in this photo.
If all went well on your Raspberry Pi, you should be able to bring up a terminal shell connected to your RPi and type "lsusb" to verify that the Go!Temp probe is now connected.
pi@raspberrypi ~ $ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 005: ID 08f7:0002 Vernier EasyTemp/Go!Temp
If your output looks like above, especially the last line where it says the Vernier Go!Temp was recognized and is connected as Device 005, you are golden.
One last check before we start to program using a Java SE Embedded app to grab the temperature readings is to make sure the /dev/ldusb0 device is present. So, type this command and make sure your output matches:
pi@raspberrypi ~ $ ls -l /dev/ldusb0
crw------T 1 root root 180, 176 Nov 18 17:25 /dev/ldusb0
If all that looks good, you're ready for the next step which is to write a Java SE Embedded app to read the temperature values, and eventually write code with IoT intelligence to tweet out the status of your turkey while it's cooking so that it becomes an Internet of Things connected bird on Twitter. Look for that in the next part of this series... Mmmmm... I can almost smell that turkey roasting...
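The Java app comes in the next part of the series; as a rough preview of what it will have to do, here is a hypothetical Python sketch of parsing the 8-byte reports that /dev/ldusb0 delivers. The packet layout (1-byte sample count, 1-byte sequence number, three little-endian 16-bit samples) and the raw/128 → Celsius conversion are assumptions based on the Go!Temp's published format, not something verified here:

```python
import struct

def parse_gotemp_packet(packet: bytes):
    """Parse one assumed 8-byte Go!Temp report: 1 byte sample count,
    1 byte sequence number, then three little-endian int16 samples.
    The raw/128.0 -> Celsius conversion factor is an assumption."""
    count, seq, s1, s2, s3 = struct.unpack('<BBhhh', packet)
    samples = [s1, s2, s3][:count]       # only `count` samples are valid
    return [raw / 128.0 for raw in samples]

# Reading the real device would look like this (needs root, not run here):
# with open('/dev/ldusb0', 'rb') as probe:
#     print(parse_gotemp_packet(probe.read(8)))

# Fabricated example packet: 2 valid samples of 2560 raw counts each
print(parse_gotemp_packet(struct.pack('<BBhhh', 2, 0, 2560, 2560, 0)))  # [20.0, 20.0]
```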
See the full series on the steps to this cool demo:
Internet of Things (IoT) Thanksgiving Special: Turkey Tweeter (Part 1)
The value that Oracle WebCenter brings to organizations is well-documented, but it is also multi-faceted. Multiple facets can mean additional complexity, which can seem daunting until you isolate the specific business challenges that need addressing, prioritize them, and then take the time to "draw out" how best to address those challenges in the most cost-effective way possible.
Redstone Content Solutions is a long-time Oracle partner and has assisted many organizations with their planning, deployment and technical expertise. Their mission statement is simply to "provide organizations with the tools necessary to securely accumulate and disseminate knowledge". This may include a variety of technologies deployed to work together in order to achieve specific business goals, including WebCenter Portal, Sites and Content.
To better illustrate how WebCenter can be used to meet common business objectives, Redstone has created an entertaining short video to show how WebCenter can best be used to create, share, manage and distribute information to the benefit of your business. Take a minute to check it out here and be sure to visit them at http://www.redstonecontentsolutions.com to learn more.
Security on the cloud has been a general topic of discussion since its debut as the new standard in business IT. As these storage services mature, however, the safety of the cloud is being consistently reinforced.
Securing the cloud before sending data away
BizTech reported that new strategies are being developed to help decision-makers secure their information before deployment. Encryption, for instance, enables IT managers to attach a complex code to data before it is outsourced. With this solution, only owners of the decryption key can unlock the information.
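That encrypt-before-upload pattern can be sketched in a few lines. This is a minimal illustration, assuming the third-party `cryptography` package's Fernet recipe; the record contents are hypothetical, and a real deployment would also need key management:

```python
# Minimal sketch of client-side encryption before cloud upload.
# Assumes: pip install cryptography. Data below is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # the decryption key stays on-premises
cipher = Fernet(key)

record = b"patient: Jane Doe, diagnosis: ..."  # hypothetical sensitive data
blob = cipher.encrypt(record)  # only this opaque blob is sent to the cloud

# The provider stores `blob`; without `key`, the plaintext cannot be recovered.
assert cipher.decrypt(blob) == record
```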
The cloud provides corporations with remote access, which is especially useful for decision-makers who maintain a large volume of employees. According to the source, this strategy is also useful for big data corporations who have varying levels of information sensitivity.
In addition to pre-transition security, the source reported that cloud solutions are enabling IT teams to craft hybrid services that can be launched across existing storage infrastructures. One of the primary concerns associated with new cloud deployments is having to give up direct control over some digital management responsibilities.
Because of the sensitive nature of information in some industries, such as banking institutions and health-related facilities, losing ultimate authority over client records can be daunting for decision-makers, but by integrating an on-premises solution with the cloud, it's possible to retain data in-house.
Using the cloud's agility to reinforce customization
The agility of cloud-based solutions also enhances security. The cloud's enhanced flexibility enables decision-makers to deploy unique and customizable applications across digital architectures. Database administration services, for example, attach to the cloud and provide companies with increased security, more categorization options and more freedom to focus on other tasks.
Along with providing businesses greater protection against cybercrime, remote DBA experts will also help maintain information, allowing decision-makers to explore more options. PandoDaily reported that with a little ingenuity, it's possible to make the cloud more secure than legacy computing methods. For smaller enterprises, especially, the enhanced customization and the access to improved computing features make generating a high-quality network a cost-effective and innovative solution.
As business leaders begin considering their options, it's important for IT managers to recognize how the cloud's security can be used to safeguard the company's digital assets.
More Excel Support
Dodeca has always been strong on Excel version support and this version delivers even more Excel functionality. Internally, we use the SpreadsheetGear control, which does a very good job with Excel compatibility. This version of Dodeca integrates a new version of SpreadsheetGear that now has support for 398 Excel functions including the new SUMIFS, COUNTIFS, and CELL functions.
Excel Page Setup Dialog
The new version of Dodeca includes our implementation of the Excel Page Setup Dialog which makes it easy for users to customize the printing of Dodeca views that are based on Excel templates. Note that for report developers, the Excel Page Setup has also been included in the Dodeca Template Designer.
New PDF View Type
Customers who use PDF files in their environments will like the new PDF View Type. In previous releases of Dodeca, PDF documents displayed in Dodeca opened in an embedded web browser control. Beginning in this version, Dodeca includes a dedicated PDF View type that uses a specialized PDF control.
View Selector Tooltips
Finally, users will like the new View Selector tooltips which optionally display the name and the description of a report as a tooltip.
Performance
Performance is one of those things that users always appreciate, so we have added a new setting that can significantly improve performance in some circumstances. Dodeca has a well-defined set of configuration objects that are stored on the server and we were even awarded a patent recently for the unique aspects of our metadata design. That being said, depending on how you implement reports and templates, there is the possibility of having many queries issued to the server to check for configuration updates. In a few instances, we saw that optimizing the query traffic could be beneficial, so we have implemented the new CheckForMetadataUpdatesFrequencyPolicy property. This property, which is controlled by the Dodeca administrator, tells Dodeca whether we should check the server for updates before any object is used, as was previously the case, only when a view opens, or only when the Dodeca session begins. We believe the latter case will be very useful when Dodeca is deployed in production as objects configured in production often do not change during the workday and, thus, network traffic can be optimized using this setting. The screenshot below shows where the administrator can control the update frequency.
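Independent of the product screenshot, the trade-off behind a setting like this can be sketched generically. The following is an illustrative toy cache, not Dodeca's actual implementation; the class and policy names are hypothetical:

```python
class MetadataCache:
    """Toy cache contrasting check-on-every-use vs. check-once-per-session.
    Illustrative only; not Dodeca's implementation."""
    def __init__(self, fetch, policy="on_use"):
        self.fetch = fetch          # callable that makes a server round trip
        self.policy = policy        # "on_use" or "on_session_start"
        self.cache = {}
        self.server_calls = 0

    def get(self, name):
        # "on_use": always hit the server; otherwise only on the first use.
        if self.policy == "on_use" or name not in self.cache:
            self.server_calls += 1
            self.cache[name] = self.fetch(name)
        return self.cache[name]

store = {"report1": "v1"}
session_cache = MetadataCache(lambda n: store[n], policy="on_session_start")
for _ in range(100):
    session_cache.get("report1")
print(session_cache.server_calls)   # 1: fetched once, reused all session
```

With the "on_use" policy the same loop would cost 100 round trips, which is the traffic the new property lets administrators avoid.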
Though users will like these features, we have put a lot of new things in for the people who create Dodeca views and those who administer the system. Let's start with something that we think all Dodeca admins will use frequently.
Metadata Property Search Utility
As our customers continue to expand their use of Dodeca, the number of objects they create in the Dodeca environment continues to grow. In fact, we now have customers who have thousands of different objects that they manage in their Dodeca environments. The Metadata Property Search Utility will help these users tremendously.
This utility allows the administrator to enter a search string and locate every object in our system that contains that string. Once a property is located, there is a hyperlink that will navigate to the given object and automatically select the relevant property. This dialog is modeless, which means you can navigate to any of the located items without closing the dialog.
Note: this version does not search the contents of Excel files in the system.
Essbase Authentication Services
In the past, when administrators wished to use an Essbase Authentication service to validate a login against Essbase and automatically obtain Dodeca roles based on the Essbase user's group memberships, they had to use an Essbase connection where all users had access to the Essbase application and database. The new ValidateCredentialsOnly property on both of the built-in Essbase Authentication services now flags the service to check login credentials at the server-level only, eliminating the need for users to have access to a specific Essbase database.
New Template Designer Tools
Prior to Dodeca 6.x, all template editing was performed directly in Excel. Since that time, however, most template design functionality has been replicated in the Dodeca Template Designer, and we think it is preferable due to the speed and ease of use with which users can update templates stored in the Dodeca repository. We have added a couple of new features to the Template Designer in this version. The first tool is the Group/Ungroup tool that allows designers to easily apply Excel grouping to rows and/or columns within the template. The second new tool is the Freeze/Unfreeze tool that is used to freeze rows and/or columns in place for scrolling.
Parameterized SQL Select Statements
Since we introduced the SQLPassthroughDataSet object in the Dodeca 5.x series, we have always supported the idea of tokenized select statements. In other words, the SQL could be written so that point-of-view selections made by users could be used directly in the select statement. In a related fashion, we introduced the concept of parameterized insert, update, and delete statements in the same version. While parameterized statements are similar in concept to tokenized statements, there is one important distinction under the covers.
In Dodeca, parameterized statements are parsed and converted into prepared statements that can be used multiple times, resulting in more efficient use of server resources. The parameterized select statement was introduced in this version of Dodeca in order for customers using certain databases that cache the prepared statement to realize improved server efficiency on their select statements.
Workbook Script Formula Editor Improvements
We have also been working hard to improve extensibility for developers using Workbook Scripts within Dodeca. In this release, our work focused on the Workbook Script Formula Editor. The first thing we added here is color coding that automatically detects and distinguishes Excel functions, Workbook Script functions, and Dodeca tokens. In the new version, Excel functions are displayed in green, Dodeca functions and parentheses are displayed in blue, and tokens are displayed in ochre. Here is an example.
In addition, we have implemented auto-complete for both Excel and Dodeca functions.
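Stepping back to the tokenized-versus-parameterized distinction mentioned above: it can be illustrated with any database API. Here is a generic sqlite3 sketch (not Dodeca's syntax; table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100), ("West", 250), ("East", 50)])

region = "East"

# Tokenized style: the value is substituted into the SQL text itself, so
# each distinct value yields a different statement for the server to parse.
tokenized = f"SELECT SUM(amount) FROM sales WHERE region = '{region}'"

# Parameterized style: one statement text with a placeholder; values are
# bound at execute time, so the server can reuse one prepared statement.
parameterized = "SELECT SUM(amount) FROM sales WHERE region = ?"

print(conn.execute(tokenized).fetchone()[0])                  # 150
print(conn.execute(parameterized, (region,)).fetchone()[0])   # 150
```

Both return the same result; the difference is that the parameterized form lets the database cache and reuse the parsed statement across executions, which is the server-efficiency point made above.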
New SQLException Event
Version 6.6 of Dodeca introduces a new SQLException event that provides the ability for application developers to customize the behavior when a SQL exception is encountered.
XCopy Release Directory
Beginning in version 6.6, the Dodeca Framework installation includes a pre-configured directory intended for customers who prefer to distribute their client via XCopy deployment instead of using Microsoft ClickOnce distribution. The XCopy deployment directory is also for use by those customers who use Citrix for deployment.
Mac OS X Release Directory
The Dodeca Framework installation now includes a pre-compiled Dodeca.app deployment for customers who wish to run the Dodeca Smart Client on Mac OS X operating systems. What that means is that Dodeca now runs on a Mac without the need for any special Windows emulators. Dodeca does not require Excel to run on the Mac (nor does it require Excel to run on Windows, for that matter), so you can certainly save your company significant licensing fees by choosing Dodeca for your solution.
In short, you can see we continue to work hard to deliver functionality for Dodeca customers. As always, the Dodeca Release Notes provide detailed explanations of all new and updated Dodeca features. As of today, we have decided to make the Release Notes and other technical documents available for download to non-Dodeca customers. If you are curious about all of the things Dodeca can do, and if you aren't afraid to dig into the details, you can now download our 389-page cumulative Release Notes document from the Dodeca Technical Documents section of our website.
Our new team members, Raymond and Tony, have been busy in their short time with us, and they’re embracing the AppsLab way.
What way is that you ask? Since the beginning, we’ve always started with an idea and moved quickly to build something conceptual to see how and if the idea works.
Connect began life as the IdeaFactory, which Rich (@rmanalan) put together in 24 hours to give life to our idea about enterprise social networking. More recently, Anthony’s (@anthonysali) new toy, the Google Glass, begat the Fusion CRM Glass app.
To be clear, none of this is product. It’s not even really project work, although we do sometimes launch projects based on the initial concept work. This is just smart developers, messing around with ideas, trying to see what works.
Over the years, we’ve built lots of these demos, which I’m calling concept demos lately. Some have evolved into full-scale projects. Others have been moth-balled into our Git repo, which I’m told has something like 40-some odd projects in various states of completeness.
I like to think that code never dies. It just waits around for the right circumstances.
Sorry about that, won’t happen again.
Anyway, with Anthony and Noel (@noelportugal) tied up with travel and other projects, Raymond and Tony have taken the baton and cranked out a couple of cool concept demos.
First, they collaborated to build a working geo-fencing demo. The idea here is that data on a device should be subject to physical location, e.g. patient data in a hospital, customer-sensitive bank data. If the device is within the fence, data exist and can be accessed; when the device leaves the fence perimeter, the data are removed from the device and cannot be retrieved from the server.
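The core check in a geo-fence like that is just a distance test against the fence's center and radius. A minimal sketch, with a haversine great-circle distance and entirely hypothetical coordinates (this is not the team's actual code):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))   # mean Earth radius in meters

def inside_fence(device, fence_center, radius_m):
    """True if the device may keep its sensitive data cached locally."""
    return haversine_m(*device, *fence_center) <= radius_m

hospital = (37.7749, -122.4194)   # hypothetical fence center
print(inside_fence((37.7750, -122.4195), hospital, 200))  # True: ~15 m away
print(inside_fence((37.8049, -122.4194), hospital, 200))  # False: ~3.3 km away
```

When the check flips to False, the app would wipe its local cache and refuse server fetches until the device re-enters the fence.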
Here are some shots of the concept demo at work.
Tony did the groundwork development for this one, and Raymond cleaned it up to demo more cleanly. The toughest part of this one was spoofing the GPS with a fake location to fool it into believing it was inside/outside the geo-fence.
Second, Jeremy, our overlord, owns a Pebble watch, so we’ve been messing about with one for giggles. Possibly as a joke, Jeremy said we should build a watchface app for sales reps that showed “motivational” metrics like days to quarter close and percentage of sales quota achieved.
So, Raymond did that.
I guess the lesson is that it’s not always a good idea to joke around developers.
Why do we do stuff like this?
Aside from proving out ideas, projects like these, the Glass app and the Leap Motion-controlled robot arm allow the guys to go hands-on with the SDKs and APIs of devices we may actually build for in the future. These experiences are incredibly valuable because when it comes time to do a full-scale project, they have a baseline understanding of what we can reasonably do and how easy or difficult it will be.
That experience leads to much better estimates of development times, and it removes some of the uncertainty involved. Oh, and it helps control the scope early in a project, which makes execution and timely delivery achievable.
If you’re counting, that’s a win-win-win-win-win, or something.
Yeah, concept demos are usually rough around the edges, but they’re baked enough to give an idea of what’s possible. Plus, concept demos get done quickly, so ideas can be vetted and move on or be tabled without spending a ton of time and effort, e.g. Raymond and Tony banged out the geo-fencing concept demo in less than two weeks, and Raymond built the Pebble concept in under a week.
And that’s real time. They were doing other things too.
1. Ventana Research awarded their 2013 Technology Innovation Award for Business Innovation in Human Capital Management to Oracle Fusion HCM.
2. Several recent Fusion HCM Go-Live Announcements:
- American Career College and West Coast University
- CAJ Senior Health Care
- Toshiba Medical Systems
3. Some significant and hard-fought sales wins:
- BT Invests
None of this is really a surprise to me. In fact, from my POV, the ball is rolling a bit early. Looking at market history, it takes about 5 years from the general availability release of new Oracle packaged enterprise apps before we begin to see success. Oracle plays a long game with new product releases, especially in the enterprise applications world. Remember when Oracle's Steve Miranda continually reminded us that Fusion Applications were more of a journey than an endpoint? GA of a strong product + willingness to play the long game + incremental development: this is what Steve meant.
Connecting the dots here, and being aware of increased customer interest in Fusion HCM in the US market, I see this as the beginning of the tipping point for Fusion HCM.
As Oracle continues to emphasize HCM in the cloud, a SaaS approach for small and medium enterprises, and continued incremental development of Fusion HCM, I expect to see even more momentum build in 2014.
The kettle is beginning to boil!
A while back, I promised some details on the Google Now TV Card I found accidentally.
I was watching TV via an HDTV antenna and happened to pop open Google Now for some reason or another. Now showed me this card:
Freaky, right? I dismissed it, but my curiosity was piqued. So, I did some digging about the TV Card and went back to give it a whirl.
The card only works on broadcast TV, which makes sense when you reverse engineer it a little. Google Now knows where you are, and based on that, can determine the shows that are being broadcast. That helps narrow down the possibilities, but even given that information, I found the card a bit tough to trigger.
I did my testing during daytime TV, and it failed to detect the Ellen DeGeneres show and another show I tried. It did finally work for the Fox broadcast of the MLB playoff series between the Tigers and Red Sox.
Here is the card it showed:
If I remember correctly, the announcers were talking about Torii Hunter.
Pretty interesting stuff, not mind-blowing, but interesting. This is a pretty powerful example of what Google wants to do though, which is integrate all it knows about the world and you, a.k.a. its knowledge graph, and provide what it thinks might be useful to you at the moment.
Find the comments.
If you hurry, you can watch their episode on Hulu. If you decide to wait, they appear in Season 5 Episode 10. Paul and family are the very first segment, so you won’t have to watch the entire episode, although I did because I’ve never seen Shark Tank. It’s an interesting show.
The premise is simple: companies seeking investment pitch a panel of investors who, if they’re interested, commit a sum of money in exchange for a stake in the business.
Now for the background. Paul founded this little team back in 2007, along with Rich (@rmanalan) and me. Many of you may know Paul, but you might not know that he and his wife started a little lunchbox business called Yubo in their spare time. I think that was in 2009.
Using their savings, they set out to solve a common problem for families, the lunchbox and the jumble of containers and baggies that go into it. Yubo comes with BPA-free, dishwasher-safe containers that fit snugly inside, along with a reusable cold pack.
Plus, the Yubo’s faceplate is customizable and replaceable. It’s an ingenious product. I bought one for my daughter; she loves it; I’ve known Paul for years, etc. Consider that your disclaimer.
I remember the inception days of Yubo. Paul told me about working with an industrial designer and taking late calls with manufacturers overseas, all in his free time. It all seemed very draining, but like every small business, they soldiered through it because they believed in the idea.
Anyway, it was oddly gratifying for me to see Paul and family on national TV, successfully pitching this panel of luminaries. I can only imagine how elated they felt when they struck a deal.
If you’re wondering, Paul recently left Oracle, again; this time for a small company called Achievers.
Good luck, dude. Without you, we wouldn’t be doing cool stuff here.
In the spirit of Thanksgiving, being celebrated this Thursday in the USA:
This post is shared from our Oracle Java Community.
Hinkmond Wong's Weblog
If you're vegetarian, don't worry, you can follow along and just run the simulation of the Turkey Tweeter, or better yet, try a tofu version of the Turkey Tweeter.
Here is the parts list:
- 1 Vernier Go!Temp USB Temperature Probe
- 1 Uncooked Turkey
- 1 Raspberry Pi (not Pumpkin Pie)
- 1 Roll thermal reflective tape

You can buy the Vernier Go!Temp USB Temperature Probe for $39 here: http://www.vernier.com/products/sensors/temperature-sensors/go-temp/. And you can get the thermal reflective tape from any auto parts store. (Don't tell them what you need it for. Say it's for rebuilding the V-8 engine in your Dodge Hemi. Avoids the need for a long explanation and sounds cooler...)
The uncooked turkey can be found in your neighborhood grocery store. But, if you're making a vegetarian Tofurkey, you're on your own... The Java Embedded app will be the same, though (Java is vegan).
So, grab all your parts and come back here for the next part of this project...
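While you wait for the next part, here's a minimal Python simulation of the idea, in the spirit of the vegetarian option above. Everything in it (the function names, the polling loop, the heating model) is an illustrative assumption of mine, not the actual project code, which is a Java Embedded app:

```python
import random

DONE_F = 165.0  # USDA-recommended safe internal temperature for turkey (Fahrenheit)

def read_temp_f(current):
    """Simulate one reading from the Go!Temp probe: the oven slowly
    heats the bird, so each poll is a little warmer than the last."""
    return current + random.uniform(0.5, 2.0)

def turkey_tweeter(start_f=40.0):
    """Poll the (simulated) probe until the turkey is done, then
    return the tweet text instead of actually posting it."""
    temp = start_f
    readings = 0
    while temp < DONE_F:
        temp = read_temp_f(temp)
        readings += 1
    return "Turkey is done! Internal temp: %.1f F after %d readings" % (temp, readings)

print(turkey_tweeter())
```

Swapping `read_temp_f` for a real read of the USB probe, and the return statement for a call to a Twitter client, gives the hardware version.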
This article has been updated on November 26 to include the option regarding downloading the MDS content.
The Meta Data Services (or MDS for short) of Oracle's SOA/BPM Suite is used to manage various types of artifacts like:
- Process models created with Process Composer,
- Abstract WSDLs and XSDs,
- Domain Value Maps (DVMs), and even
- Artifacts of deployed composites.
To create an MDS connection, go to the Resource Palette -> New Connection -> SOA-MDS. This will pop up a tab from which you can create a database connection to the MDS, for example to the dev_mds schema. Having created the database connection, you have to choose the partition to use for the SOA-MDS connection. To be able to check out processes created with Composer from the MDS, or to save them in the MDS, you create a SOA-MDS connection that uses the obpm partition. As the name already suggests, this is a BPM-specific partition. To browse the other artifacts I mention above, you use the soa-infra partition, which is shared by both SOA and BPM.
In the figure below you can see the two types of connections: the upper one to the soa-infra partition and the lower one to the obpm partition. In the (soa-infra) apps folder you can find the reusable artifacts that you have deployed explicitly (like abstract WSDLs, XSDs, and EDLs).
What you also see is a deployed-composites folder that shows all composites that have been deployed. When expanding a composite, you will find that all its artifacts are shown. This is a much easier way to verify that you do not deploy too many artifacts to the server than by introspecting the SAR file, I would say. Except for .bpmn files (which at the time of writing are not yet recognized by this MDS browser), you can open all plain-text files in JDeveloper.
Downloading the MDS from Enterprise Manager

Now let's assume that you have not been given access to the MDS's DB schema on the environment (perhaps because it is Production), but you do have access to Enterprise Manager. For this situation my dear colleague Subhashini Gumpula pointed me to the option of downloading the content from the MDS as follows:
soa-infra -> Administration -> MDS Configuration -> and then on the right side of the screen: Export.
This will download a soa-infra_metadata.zip file with its content!
Looking up Artifacts in the MDS Using a Browser

Now let's assume that you also have not been given access to Enterprise Manager on the environment, but you can access the server using the HTTP protocol. Thanks to my dear colleague Luc Gorrisen, I recently learned that you can browse the MDS using part of the URL of the composite, as follows:
For example, to look up the abstract WSDL of some ApplicationService that is used by some StudentRegistration business process, I can use the following URL.
Mind you, this is not restricted to only the WSDL's it is using.
Ain't that cool?!
It’s fair to say that Purdue University has sparked several important conversations in ed tech through their work on Course Signals. First, they pretty much put the retention early warning system as a product category on the map, conducting ground-breaking research and building a system that several major ed tech players have either licensed or imitated. More recently, they have sparked a conversation about the state of ed tech research and peer review as their more recent research has been called into question. I highly recommend reading the comment threads on these two posts to get a sense of that conversation.
Now I think Purdue may spark a third conversation—this time around the ethics of institutional learning analytics research and commercialization. Because there is no question in my mind that they have a serious ethical problem on their hands.
While I have no proof that Purdue is aware of the concerns that have been raised about the Course Signals research, I think it highly unlikely that they are unaware, after articles have been published in Inside Higher Ed and the Times Higher Education. The questions have been out for a month now, and so far we have nothing in the way of an official response from the university.
That’s a big problem for several reasons. First, as has been mentioned here before, Purdue has licensed its technology to Ellucian for sale to other schools. In other words, the university is effectively making money on the strength of research claims that have now been called into question. Second, the people who conducted and published the research are not tenured faculty but non-tenurable staff, and they did so using institutional data the access to which Purdue ostensibly controls. It seems overwhelmingly likely that the researchers whose work is being challenged are effectively powerless to respond without permission and support from their institution. If so, then these people are being put in a terrible position. They are listed as the authors of the research, but they do not have the power that an academic Principal Investigator would have to be properly accountable for the work.
For both of these reasons, I believe that Purdue has an ethical obligation as an institution to respond to the criticism. Since they seem disinclined (or at least slow) to do so of their own accord, perhaps some appropriate pressure can be brought to bear. If you are an Ellucian customer, I urge you to contact them and ask why there has not been an official response to the challenge regarding the research. Both of the partners here should know that their brand reputations and therefore future revenue streams are at stake here. (I would be grateful if you would let me know, either publicly or privately, if you take this step. I would like to keep track of the pressure that is being brought to bear. I will keep your name and that of your institution private if you want me to.)
But I also think there is a broader conversation that needs to happen about the general problem. On the one hand, schools have an obligation to protect the privacy of their students. This makes releasing student success research data challenging. On the other hand, if the research cannot be properly peer reviewed because it cannot be shared, then we cannot develop confidence in the research that is coming to us. This problem is exacerbated when research is conducted by staff whose independence is not protected, and by the increasing tendency of institutions to commercialize their educational technology research and development work. There needs to be a community-developed framework to help facilitate the safe and appropriate sharing of the data so institutions can be held accountable for their research and the staff who conduct that research can be appropriately protected.
How can you conditionally turn cell borders on and off in Publisher's RTF/XSLFO templates? With a little digging you'll find what appear to be the appropriate attributes to update in your template. You would logically come up with using the various border styling options:
- border-top|bottom|left|right-width
- border-top|bottom|left|right-style
- border-top|bottom|left|right-color
Buuuut, that doesn't work. Updating them individually does not make a difference to the output. I'm not sure why, and I will ask, but for now here's the solution: use the compound border attribute border-top|bottom|left|right. This takes the form border-bottom="0.5pt solid #000000", i.e. you set all three options (width, style, and color) at once rather than individually. In a BIP template you use:
<?if:DEPT='Accounting'?>
<?attribute@incontext:border-bottom;'3.0pt solid #000000'?>
<?attribute@incontext:border-top;'3.0pt solid #000000'?>
<?attribute@incontext:border-left;'3.0pt solid #000000'?>
<?attribute@incontext:border-right;'3.0pt solid #000000'?>
<?end if?>
3pt borders are a little excessive, but you get the idea. This approach can be used with the if@row option too, to get the complete row's borders to update. If your template will need to run in right-to-left languages, e.g. Arabic or Hebrew, then you will need to use start and end in place of left and right.
For the inquisitive reader, you're maybe wondering: how did this guy know that? And why the heck is this not in the user docs?
Other than my all-knowing BIP guru status ;0), I hit the web for info on XSLFO cell border attributes and then turned to the Template Builder for Word, particularly its export option: I generated the XSLFO output from a test RTF template and took a look at the attributes. Then I started trying stuff out. I'm a hacker and proud of it! As for the user docs, I'll log a request for an update.