Feed aggregator

Oracle Cloud – Glassfish Heap Memory Issues

John Scott - Fri, 2016-04-08 10:35

I recently ran into out of memory issues with our Oracle Cloud Glassfish server.

This manifested itself as Glassfish becoming unresponsive and eventually crashing. Digging into the logfile, we found entries like this:

server.log_2016-04-08T18-32-27:[#|2016-04-04T18:21:56.472+0000|SEVERE|oracle-glassfish3.1.2|null|_ThreadID=74;_ThreadName=Thread-2;|Java heap space
server.log_2016-04-08T18-32-27:java.lang.OutOfMemoryError: Java heap space

After a lot of Googling, I found that you can display the current Java settings used by Glassfish by running the list-jvm-options command, like this:

[oracle@prod bin]$ ./asadmin list-jvm-options
Enter admin user name> admin
Enter admin password for user "admin">
-XX:MaxPermSize=192m
-XX:PermSize=64m
-client
-Djava.awt.headless=true
-Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder
-XX:+UnlockDiagnosticVMOptions
-Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed
-Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
-Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf
-Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as
-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks
-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks
-Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext
-Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
-DANTLR_USE_DIRECT_CLASS_LOADING=true
-Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory
-Dosgi.shell.telnet.port=6666
-Dosgi.shell.telnet.maxconn=1
-Dosgi.shell.telnet.ip=127.0.0.1
-Dgosh.args=--nointeractive
-Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/
-Dfelix.fileinstall.poll=5000
-Dfelix.fileinstall.log.level=2
-Dfelix.fileinstall.bundles.new.start=true
-Dfelix.fileinstall.bundles.startTransient=true
-Dfelix.fileinstall.disableConfigSave=false
-XX:NewRatio=2
-Xmx128m
Command list-jvm-options executed successfully.

You can see a lot of info here. The heap memory parameter is the -Xmx one, which we can adjust by first deleting the current setting:

./asadmin delete-jvm-options --target server-config -- '-Xmx128m'

and then assigning a new value

./asadmin create-jvm-options --target server-config -- '-Xmx256m'
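
After making both changes, the new value can be confirmed by restarting the domain and listing the options again. A minimal sketch, assuming the default domain name domain1 (adjust for your environment):

./asadmin restart-domain domain1
./asadmin list-jvm-options

The output should now show -Xmx256m instead of -Xmx128m.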

 

Then we restarted the Glassfish server and haven’t seen the issue occur since.
It’s important not to just blindly choose a value here; you need to understand why you’re running out of heap memory rather than simply increasing it for the sake of it (but that’s a post for another day).

 


Installing Sample Databases To Get Started In Microsoft SQL Server

Pythian Group - Fri, 2016-04-08 09:20

Anyone interested in getting started in SQL Server will need some databases to work with. This article aims to help the new and future DBA/Developer get started with a few sample databases.

There are several places to get sample databases, but anyone starting out in SQL Server should go to the Microsoft sample databases. The reason for this is that there are thousands of blogs and tutorials on the internet that use these databases as their basis.

The steps below detail how to get the sample databases and how to attach them to SQL Server so you can start working with them.

This blog assumes you have a version of SQL Server installed; if not, you can click here for a great tutorial.

These two databases are a great start to learning SQL Server from both a transactional and a data warehousing point of view.

  • Now that we have downloaded these two files, we will need to attach them one at a time. First, open SQL Server Management Studio and connect to your instance.
  • Expand the Object Explorer tree until you can right-click on the Databases folder, then click Attach…

[Image: Object Explorer]

  • Click the Add button, then navigate to and select the .mdf file (this is the database file you downloaded).

[Image: Attach Database dialog]

  • There is one step a lot of people getting started in SQL Server often miss. We have only attached a data file, but in order to bring the database online SQL Server also needs a log file, which we don’t have. The trick is that if we remove the log file entry from the Attach Database window, SQL Server will automatically create a new log file for us. To do this, simply select the log file and click Remove.

[Image: Removing the log file entry]

  • Finally, when your window looks like the one below, simply click OK to attach the database.

[Image: Attach Database dialog, ready to attach]

  • Repeat steps 3 to 6 for the second database file and any others you wish to attach.
  • The databases are now online in SQL Server and ready to be used.

[Image: Object Explorer showing the attached databases]

And that’s it! You now have an OLTP database and a DW database for BI.
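
If you prefer to script the attach rather than use the wizard, the same "attach without a log file" trick can be done in T-SQL with FOR ATTACH_REBUILD_LOG. A minimal sketch; the database name and file path below are examples only and should match the .mdf file you downloaded:

CREATE DATABASE AdventureWorks2014
    ON (FILENAME = N'C:\SQLData\AdventureWorks2014_Data.mdf')
    FOR ATTACH_REBUILD_LOG;
GO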

Below are links to some good starting tutorials and some additional databases.

Databases

CodePlex

Stack Overflow

Tutorials

W3Schools

Enjoy!

Categories: DBA Blogs

Trello is my new knowledge base

Tony Andrews - Fri, 2016-04-08 08:53
How often do you hit an issue in development and think "I know I've had this problem before, but what's the solution?" Most days, if you've been around a long time like me. It could be "how do you create a transparent icon", or "what causes this Javascript error in an APEX page". So you can spend a while Googling and sifting through potential solutions that you vaguely remember having seen...

security diversion before going up the stack

Pat Shuff - Fri, 2016-04-08 02:07
This entry is going to be more of a Linux tutorial than a cloud discussion, but it is relevant. One of the issues that admins face is the creation and deletion of accounts. With cloud access being something relatively new, the last thing that you want is to generate a password and allow telnet access to a server in the cloud. Telnet is inherently insecure, and any script kiddie with a desire to break into accounts can run ettercap and look for clear-text passwords flying across an open wifi or wired internet connection. What you really want to do is log in via secure ssh, or PuTTY if you are on Windows. This is done with a public/private key exchange.

There are many good explanations of ssh key exchange, generating ssh keys, and using ssh keys. My favorite is a digitalocean.com writeup. The net-net of the writeup is that you generate a public and private key using ssh-keygen or PuTTYgen and upload the public key to the ~user/.ssh/authorized_keys location for that user. The following scripts should work on Azure, Amazon, and Oracle Linux instances created in the compute shapes. The idea is that we initially created a virtual machine with the cloud vendor, and the account that we created with the VM is not our end user but our cloud administrator. The next level of security is to create a new user and give them permissions to execute what they need to execute on this machine. For example, in the Oracle Database as a Service images there are two users created by default: oracle and opc. The oracle user has the rights to execute everything related to sqlplus, access the file systems where the database and backups are located, and everything else related to the oracle user. The opc user has sudo rights so that they can execute root scripts, add software packages, apply patches, and other things. The two users have different access rights and administration privileges. In this blog we are going to look at creating a third user so that we can have someone like a backup administrator log in and copy backups to tape or a disk at another data center. To do this you need to execute the following instructions.

sudo useradd backupadmin -g dba                           # create the new user in the dba group
sudo mkdir ~backupadmin/.ssh                              # create the hidden .ssh directory
sudo cp ~oracle/.ssh/authorized_keys ~backupadmin/.ssh    # copy an existing authorized_keys file
sudo chown -R backupadmin:dba ~backupadmin                # give backupadmin ownership of its home directory
sudo chmod 700 ~backupadmin/.ssh                          # restrict the .ssh directory to the owner only

Let's walk through what we did. First we create a new user called backupadmin. We add this user to the dba group so that they can perform dba functions that are given to the dba group. If the oracle user is part of a different group then they need to be added to that group and not the dba group. Next we create a hidden directory in the backupadmin directory called .ssh. The dot in front of the file denotes that we don't want this listed with the typical ls command. The sshd program will by default look in this directory for authorized keys and known hosts. Next we copy a known authorized_keys file into the new backupadmin .ssh directory so that we can present a private key to the operating system as the backupadmin to login. The last two commands are setting the ownership and permissions on the new .ssh directory and all files under it so that backupadmin can read and write this directory and no one else can. The chown sets ownership to backupadmin and the -R says do everything from that directory down to the same ownership. While we are doing this we also set the group permissions on all files to the group dba. The final command sets permissions on the .ssh directory to read, write, and execute for the owner of the directory only. The zeros remove permissions for the group and world.
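
If you would rather give backupadmin its own key instead of reusing the oracle key (which, as noted below, is the better practice), a minimal sketch looks like this; the file name backupadmin_key and the key comment are just examples:

ssh-keygen -t rsa -b 2048 -f backupadmin_key -C backupadmin   # run on your desktop, or use PuTTYgen on Windows
# copy backupadmin_key.pub to the server, then append it to the authorized_keys file
sudo tee -a ~backupadmin/.ssh/authorized_keys < backupadmin_key.pub
sudo chown backupadmin:dba ~backupadmin/.ssh/authorized_keys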

In our example we are going to show how to access a Linux server from Azure and modify the permissions. First we go to the portal.azure.com site and login. We then look at the virtual machines that we have created and access the Linux VM that we want to change permissions for. When we created the initial virtual machine we selected ssh access and uploaded a public key. In this example we created the account pshuff as the initial login. This account is created automatically for us and is given sudo rights. This would be our cloud admin account. We present the same ssh keys for all virtual machines that we create and can copy these keys or upload other keys for other users. Best practice would be to upload new keys and not replicate the cloud admin keys to new users as we showed above.

From the portal we get the ip address of the Linux server. In this example it is 13.92.235.160. We open up putty from Windows, load the 2016.ppk key that corresponds to the 2016.pub key that we initialized the pshuff account with. When asked for a user to authenticate with we login as pshuff. If this were an Oracle Compute Service instance we would login as opc since this is the default account created and we want sudo access. To login as backupadmin we open putty and load the ppk associated with this account.

When asked for what account to login as we type in backupadmin and can connect to the Linux system using the public/private key that we initialized.

If we examine the public key, it is a long string of seemingly random text (it is actually the base64 encoding of the key). To revoke a user's access to the system we change the authorized_keys file to a different key. The .pub file shows this text if we open it in WordPad on Windows; this is the file that we uploaded when we created the virtual machine.

To deny access to backupadmin (in the case of someone leaving the organization or moving to another group), all we have to do is edit the authorized_keys file as root and delete this public key. We can insert a different key with a copy and paste operation, allowing us to rotate keys. Commercial software like key vaults and key management systems allows you to do this from a central control point and update/rotate keys on a regular basis.
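
For example, a one-liner like the following deletes a single entry by matching the comment field at the end of the key line; backupadmin@workstation is just a hypothetical comment used for illustration:

sudo sed -i '/backupadmin@workstation/d' ~backupadmin/.ssh/authorized_keys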

In summary, best practices are to upload a key per user and rotate them on a regular basis. Accounts should be created with ssh keys and not password access. Rather than copying the keys from an existing account, grant access by uploading a new key and editing authorized_keys. Access can be revoked by the root user by removing the keys, or from an automated key management system.

Links for 2016-04-07 [del.icio.us]

Categories: DBA Blogs

Five Reasons to Attend Oracle’s Modern Service Experience 2016

Linda Fishman Hoyle - Thu, 2016-04-07 13:38

A Guest Post by Christine Skalkotos, Oracle Program Management (pictured left)

Oracle’s Modern Service Experience 2016 is again lighting up fabulous Las Vegas April 26-28, and we’re betting this will be our best event yet. From the speaker lineup and session catalog to the networking experiences and Customer Appreciation Event, we’re going “all in,” and we hope you’ll join us. Here are five reasons you should head to Las Vegas this April for the Modern Service Experience:

1. In-Depth Service Content

The Modern Service Experience features more than 40 sessions led by customer service experts, analysts, and top brands. Through the keynotes, general sessions and breakouts, you’ll hear about current and future trends in customer service and will walk away inspired and ready to turn your insights into actions. Take a look at the just-launched conference program to see the impressive speaker lineup.

The conference program features content for everyone regardless of your role. Attend sessions in the following tracks:

  • Cross-Channel Contact Center
  • Executive
  • Field Service Management
  • Industry
  • Knowledge
  • Oracle Policy Management
  • Platform
  • Web Customer Service
  • Customer Experience

In addition, you’ll hear about Oracle Service Cloud’s vision and product roadmap. Within the breakouts, you’ll learn about new product functionality and how to get the most out of your implementation. In the expo hall, you’ll have the opportunity to participate in interactive demos.

2. One-of-a-Kind Networking

In addition to hearing best practices and soaking up insights from session and keynote speakers, some of the best information you’ll gather at the Modern Service Experience will come from your peers. Customer service leaders from some of the world’s top brands are attending the Modern Service Experience. The conference provides many opportunities to network with peers, as well as with Oracle product experts, sales, executives, and partners.

Before you head to Las Vegas, see who else is attending and start broadening your network through social media. Use the hashtag #ServiceX16, and join the conversation.

3. Thought Leaders & Inspiring Speakers

Attend the Modern Service Experience to hear from some of the leading minds in modern customer service. The featured speaker lineup includes:

  • Mark Hurd, CEO, Oracle
  • Jean-Claude Porretti, Customer Care Worldwide Manager, Peugeot Citroën
  • Scott McBain, Manager, Application Development, Overhead Door Corporation
  • Sara Knetzger, Applications Administrator, Corporate Applications, WageWorks
  • Ian Jacobs, Senior Analyst Serving Application Development & Delivery Professionals, Forrester Research
  • Kate Leggett, VP, Principal Analyst Serving Application Development & Delivery Professionals, Forrester Research
  • Ray Wang, Principal Analyst, Founder, and Chairman, Constellation Research, Inc.
  • Denis Pombriant, Founder and Managing Principal, Beagle Research

4. More Opportunities for Increasing Your Knowledge

First, take advantage of our pre-conference workshops. You’ll probably have to roll the dice to decide which of the three you’ll attend: Get Prepared for the Knowledge-Centered Support (KCS) Practices v5 Certification, Head off to the Races with Agent Desktop Automation, and Step off the Beaten Path with Oracle Service Cloud Reporting.

Next, schedule time with an Oracle Service Cloud mastermind and get answers to your burning questions as part of the Ask the Experts program (sponsored by Oracle Gold Partner Helix).

Last, connect with your peers during lunch and participate in our birds of a feather program around popular topics.

5. Celebrate with Your Fellow Customers

To show our appreciation for our customers, we’re hosting a night of food, drinks, and amazing entertainment. Goo Goo Dolls will play a private concert for attendees at the MGM Grand Arena on Wednesday evening. The Oracle Customer Appreciation Event rarely disappoints—don’t miss it. 

Finally, at 1 p.m. on Thursday, April 28, during our annual awards ceremony, we’ll recognize leading organizations and individuals in the customer service space, highlighting their impressive stories about innovation and differentiation. Guaranteed, you’ll leave motivated and energized.

What did last year’s customers have to say?

"Oracle Modern Service Experience 2015 was a top-notch event that provided me with the opportunity to learn about new Oracle Service Cloud capabilities and connected me with federal and private sector peers who have since influenced my direction as the Air Force Reserve's Chief Digital Officer, enabling me to drive the organization to a new level of innovation and efficiency this past year." – Lt Col Michael Ortiz, HQ Air Reserve Personnel Center

"The Modern Service Experience is a must for customers looking to maximize their effectiveness with Oracle Service Cloud." – Michael Morris, Match.com

See you in Las Vegas!

Oracle Live SQL: Explain Plan

Pythian Group - Thu, 2016-04-07 13:14

We’ve all encountered a situation where you want to check a simple query or some SQL syntax and don’t have a database around. Of course, most of us have at least a virtual machine for that, but it takes time to fire it up, and if you work on battery, it can drain your power pretty quickly. Some time ago, Oracle began to offer a new service called “Oracle Live SQL”. It provides the ability to test a SQL query, procedure or function, and it has a code library containing a lot of examples and scripts. Additionally, you can store your own private scripts to re-execute later. It’s a really great online tool, but it lacks some features. I tried to check the execution plan for my query but, unfortunately, it didn’t work:

explain plan for 
select * from test_tab_1 where pk_id<10;

ORA-02402: PLAN_TABLE not found

So, what can we do to make it work? The workaround is not perfect, but it works and can be used in some cases. We need to create our own plan table using the script from an installed Oracle database home, $ORACLE_HOME/rdbms/admin/utlxplan.sql. We can open the file and copy the CREATE TABLE statement for the plan table into the SQL worksheet in Live SQL. You can save the script in the Live SQL code library and make it private to reuse later, because you will need to recreate the table every time you log in to your environment again. So far so good. Is it enough? Let’s check.

explain plan for 
select * from test_tab_1 where pk_id<10;

Statement processed.

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
ERROR: an uncaught error in function display has happened; please contact Oracle support
       Please provide also a DMP file of the used plan table PLAN_TABLE
       ORA-00904: DBMS_XPLAN_TYPE_TABLE: invalid identifier


Ok, the package doesn’t work. I tried to create the missing types in my schema, but that didn’t help either. So for now dbms_xplan is not going to work for us, and we have to request the information directly from our plan table. It may not be as convenient, but it gives us enough and, don’t forget, you can save your script and just reuse it later. You don’t need to memorize the queries. Here is a simple example of how to get information about your last executed query from the plan table:

SELECT parent_id,id, operation,plan_id,operation,options,object_name,object_type,cardinality,cost from plan_table where plan_id in (select max(plan_id) from plan_table) order by 2;

PARENT_ID	ID	OPERATION	PLAN_ID	OPERATION	OPTIONS	OBJECT_NAME	OBJECT_TYPE	CARDINALITY	COST
 - 	0	SELECT STATEMENT	268	SELECT STATEMENT	 - 	 - 	 - 	9	49
0	1	TABLE ACCESS	268	TABLE ACCESS	FULL	TEST_TAB_1	TABLE	9	49

I tried a hierarchical query but didn't find it too useful in the Live SQL environment. Also, you may want to put a unique identifier on your query to more easily find it in the plan_table.

explain plan set statement_id='123qwerty' into plan_table for
select * from test_tab_1 where pk_id<10;

SELECT parent_id,id, operation,plan_id,operation,options,object_name,object_type,cardinality,cost from plan_table where statement_id='123qwerty' order by id;

PARENT_ID	ID	OPERATION	PLAN_ID	OPERATION	OPTIONS	OBJECT_NAME	OBJECT_TYPE	CARDINALITY	COST
 - 	0	SELECT STATEMENT	272	SELECT STATEMENT	 - 	 - 	 - 	9	3
0	1	TABLE ACCESS	272	TABLE ACCESS	BY INDEX ROWID BATCHED	TEST_TAB_1	TABLE	9	3
1	2	INDEX	272	INDEX	RANGE SCAN	TEST_TAB_1_PK	INDEX	9	2
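
For reference, if you do want an indented, plan-like rendering, a basic hierarchical query over the standard plan table columns looks something like this (a sketch reusing the statement_id from above):

SELECT LPAD(' ', 2*(LEVEL-1)) || operation || ' ' || options AS plan_step,
       object_name, cardinality, cost
  FROM plan_table
 START WITH id = 0 AND statement_id = '123qwerty'
CONNECT BY PRIOR id = parent_id AND statement_id = PRIOR statement_id
 ORDER SIBLINGS BY id;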

Now I have my plan_table script and query saved in Live SQL and reuse them whenever I want to check the plan for a query. I posted feedback about the ability to use dbms_xplan, and an Oracle representative replied promptly and assured me they are already working on implementing the dbms_xplan feature and many others, including the ability to run only a selected SQL statement in the SQL worksheet (like we do in SQL Developer). It sounds really promising and is going to make the service even better. Stay tuned.

Categories: DBA Blogs

Which Cassandra version should you use for production?

Pythian Group - Thu, 2016-04-07 12:47
What version for a Production Cassandra Cluster?

tl;dr; Latest Cassandra 2.1.x

Long version:

A while ago, Eventbrite wrote:
“You should not deploy a Cassandra version X.Y.Z to production where Z <= 5.” (Full post).

And, in general, it is still valid today! Why “in general“? That post is old, and Cassandra has moved a lot since then. So we can get a different set of sentences:

Just for the ones that don’t want to follow the links, and still want to pick 3.x for production use, read this:

“Under normal conditions, we will NOT release 3.x.y stability releases for x > 0.  That is, we will have a traditional 3.0.y stability series, but the odd-numbered bugfix-only releases will fill that role for the tick-tock series — recognizing that occasionally we will need to be flexible enough to release an emergency fix in the case of a critical bug or security vulnerability.

We do recognize that it will take some time for tick-tock releases to deliver production-level stability, which is why we will continue to deliver 2.2.y and 3.0.y bugfix releases.  (But if we do demonstrate that tick-tock can deliver the stability we want, there will be no need for a 4.0.y bugfix series, only 4.x tick-tock.)”

What about end of life?

Well, it is about stability; there are still a lot of clusters out there running 1.x and 2.0.x. And since it is open source software, you can always search in the community or even contribute.

If you still have doubts about which version, you can always contact us!

Categories: DBA Blogs

Oracle's Modern Finance Experience Blows into Chicago This Week

Linda Fishman Hoyle - Thu, 2016-04-07 12:17

An all-star cast will be speaking at the Modern Finance Experience in Chicago this week (April 6-7), including journalist and best-selling author Michael Lewis and Oracle CEOs Safra Catz and Mark Hurd. The theme of the conference is Creating Value in the Digital Age.

In this OracleVoice article leading up to the event, Oracle VP Karen dela Torre explains why 10- or 20-year-old systems are ill suited for the digital economy. She then lists 15 reasons why now is the time for finance to move to the cloud. Here are just a few:

  • New business models require new capabilities (e.g., KPIs, data models, sentiment analysis)
  • Subscription billing and revenue recognition standards require new functionality
  • Rapid growth requires systems that can quickly scale
  • Consolidation, standardization, and rationalization are easier in the cloud

Even to risk-averse finance executives, the call for change will be hard to ignore.

Oracle writer Margaret Harrist also writes about the digital age in a Forbes article that focuses on the not-so-well-known role of finance in the customer experience. Matt Stirrup, Oracle VP of Finance, states that leading finance organizations are looking at the business from the customer’s perspective and recommending changes to the business model or performance measures. Finance may just be the secret sauce to winning in the digital economy.

ADF BC View Criteria Query Execution Mode = Both

Andrejus Baranovski - Thu, 2016-04-07 10:55
View Criteria is set to execute in Database mode by default. There is an option to change the execution mode to Both. This executes the query and fetches results both from the database and from memory. Such query execution is useful when we want to include a newly created (but not yet committed) row in the View Criteria result set.

Download the sample application - ViewCriteriaModeApp.zip. JobsView in the sample application has its View Criteria query execution mode set to Both:


I'm using JobsView in EmployeesView through a View Accessor. If data from another VO is required, you can fetch it through a View Accessor. The View Accessor is configured with the View Criteria, which means it will be automatically filtered (we only need to set the Bind Variable value):


The Employees VO contains a custom method where the View Accessor is referenced. I'm creating a new row and executing a query with a bind variable (the primary key of the newly created row). The View Criteria is set to execution mode Both, which allows the newly created (not yet committed) row to be retrieved by the search:


View Criteria execution mode Both is useful when we want to search without losing newly created rows.
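
For illustration, a custom Application Module method along these lines might look like the sketch below (class, view object, bind variable and attribute names are illustrative only and are not taken from the sample application):

// Inside the ApplicationModuleImpl subclass; imports: oracle.jbo.Row,
// oracle.jbo.domain.Number, oracle.jbo.server.ViewObjectImpl
public void createEmployeeAndSearch(Number empId) {
    ViewObjectImpl employees = (ViewObjectImpl) findViewObject("EmployeesView1");

    // Create a new row but do not commit it yet
    Row newRow = employees.createRow();
    newRow.setAttribute("EmployeeId", empId);
    employees.insertRow(newRow);

    // Set the bind variable used by the View Criteria and re-execute the query.
    // Because the View Criteria query execution mode is Both, the uncommitted
    // row is returned along with rows fetched from the database.
    employees.setNamedWhereClauseParam("pEmployeeId", empId);
    employees.executeQuery();
}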

Two Amazing Men Discovered Evolution by Natural Selection!

FeuerThoughts - Thu, 2016-04-07 10:09
Most everyone knows about Darwin, and what they think they know is that Charles Darwin is the discoverer of Evolution through Natural Selection. And for sure, he did discover this. But the amazing thing is....he wasn't the only one. And whereas Darwin came to this theory pretty much as a Big Data Scientist over a long period of time (mostly via "armchair" collection of data from scientists and naturalists around the world), The Other Guy developed his theory of Natural Selection very much in the field - more specifically, in the jungle, surrounded by the living evidence. 

His name is Alfred Russel Wallace, he is one of my heroes, and I offer below the "real story" for your reading pleasure. 

One of the things I really love about this story is the way Darwin and Wallace respected each other, and did right by each other. We all have a lot to learn from their integrity and compassion.

Alfred Russel Wallace and Natural Selection: the Real Story 
By Dr George Beccaloni, Director of the Wallace Correspondence Project, March 2013

http://downloads.bbc.co.uk/tv/junglehero/alfred-wallace-biography.pdf

Alfred Russel Wallace OM, LLD, DCL, FRS, FLS was born near Usk, Monmouthshire, England (now part of Wales) on January 8th, 1823. Serious family financial problems forced him to leave school aged only fourteen and a few months later he took a job as a trainee land surveyor with his elder brother William. This work involved extensive trekking through the English and Welsh countryside and it was then that his interest in natural history developed.

Whilst living in Neath, Wales, in 1845 Wallace read Robert Chambers' extremely popular and anonymously published book Vestiges of the Natural History of Creation and became fascinated by the controversial idea that living things had evolved from earlier forms. So interested in the subject did he become that he suggested to his close friend Henry Walter Bates that they travel to the Amazon to collect and study animals and plants, with the goal of understanding how evolutionary change takes place. They left for Brazil in April 1848, but although Wallace made many important discoveries during his four years in the Amazon Basin, he did not manage to solve the great ‘mystery of mysteries’ of how evolution works.

Wallace returned to England in October 1852, after surviving a disastrous shipwreck which destroyed all the thousands of natural history specimens he had painstakingly collected during the last two and most interesting years of his trip. Undaunted, in 1854 he set off on another expedition, this time to the Malay Archipelago (Singapore, Malaysia and Indonesia), where he would spend eight years travelling, collecting, writing, and thinking about evolution. He visited every important island in the archipelago and sent back 110,000 insects, 7,500 shells, 8,050 bird skins, and 410 mammal and reptile specimens, including probably more than five thousand species new to science.

In Sarawak, Borneo, in February 1855, Wallace produced one of the most important papers written about evolution up until that time1. In it he proposed a ‘law’ which stated that "Every species has come into existence coincident both in time and space with a pre-existing closely allied species". He described the affinities (relationships) between species as being “...as intricate as the twigs of a gnarled oak or the vascular system of the human body” with “...the stem and main branches being represented by extinct species...” and the “...vast mass of limbs and boughs and minute twigs and scattered leaves...” living species. The eminent geologist and creationist Charles Lyell was so struck by Wallace’s paper that in November 1855, soon after reading it, he began a ‘species notebook’ in which he started to contemplate the possibility of evolution for the first time.

In April 1856 Lyell visited Charles Darwin at Down House in Kent, and Darwin confided that for the past twenty years he had been secretly working on a theory (natural selection) which neatly explained how evolutionary change takes place. Not long afterwards, Lyell sent Darwin a letter urging him to publish before someone beat him to it (he probably had Wallace in mind), so in May 1856, Darwin, heeding this advice, began to write a ‘sketch’ of his ideas for publication.

Finding this unsatisfactory, Darwin abandoned it in about October 1856 and instead began working on an extensive book on the subject.

The idea of natural selection came to Wallace during an attack of fever whilst he was on a remote Indonesian island in February 1858 (it is unclear whether this epiphany happened on Ternate or neighbouring Gilolo (Halmahera)). As soon as he had sufficient strength, he wrote a detailed essay explaining his theory and sent it together with a covering letter to Darwin, who he knew from earlier correspondence, was deeply interested in the subject of species transmutation (as evolution was then called).

Wallace asked Darwin to pass the essay on to Lyell (who Wallace did not know), if Darwin thought it sufficiently novel and interesting. Darwin had mentioned in an earlier letter to Wallace that Lyell had found his 1855 paper noteworthy and Wallace must have thought that Lyell would be interested to learn about his new theory, since it neatly explained the ‘law’ which Wallace had proposed in that paper.

Darwin, having formulated natural selection years earlier, was horrified when he received Wallace’s essay and immediately wrote an anguished letter to Lyell asking for advice on what he should do. "I never saw a more striking coincidence. If Wallace had my M.S. sketch written out in 1842 he could not have made a better short abstract! ... So all my originality, whatever it may amount to, will be smashed." he exclaimed2. Lyell teamed up with another of Darwin's close friends, Joseph Hooker, and rather than attempting to seek Wallace's permission, they decided instead to present his essay plus two excerpts from Darwin’s writings on the subject (which had never been intended for publication3) to a meeting of the Linnean Society of London on July 1st 1858. The public presentation of Wallace's essay took place a mere 14 days after its arrival in England.

Darwin and Wallace's musings on natural selection were published in the Society’s journal in August that year under the title “On the Tendency of Species to Form Varieties; And On the Perpetuation of Varieties and Species by Natural Means of Selection”. Darwin's contributions were placed before Wallace's essay, thus emphasising his priority to the idea4. Hooker had sent Darwin the proofs to correct and had told him to make any alterations he wanted5, and although he made a large number of changes to the text he had written, he chose not to alter Lyell and Hooker’s arrangement of his and Wallace’s contributions.

Lyell and Hooker stated in their introduction to the Darwin-Wallace paper that “...both authors...[have]...unreservedly placed their papers in our hands...”, but this is patently untrue since Wallace had said nothing about publication in the covering letter he had sent to Darwin6. Wallace later grumbled that his essay “...was printed without my knowledge, and of course without any correction of proofs...”7

As a result of this ethically questionable episode8, Darwin stopped work on his big book on evolution and instead rushed to produce an ‘abstract’ of what he had written so far. This was published fifteen months later in November 1859 as On the Origin of Species: a book which Wallace later magnanimously remarked would “...live as long as the "Principia" of Newton.”9

In spite of the theory’s traumatic birth, Darwin and Wallace developed a genuine admiration and respect for one another. Wallace frequently stressed that Darwin had a stronger claim to the idea of natural selection, and he even named one of his most important books on the subject Darwinism! Wallace spent the rest of his long life explaining, developing and defending natural selection, as well as working on a very wide variety of other (sometimes controversial) subjects. He wrote more than 1000 articles and 22 books, including The Malay Archipelago and The Geographical Distribution of Animals. By the time of his death in 1913, he was one of the world's most famous people.

During Wallace’s lifetime the theory of natural selection was often referred to as the Darwin- Wallace theory and the highest possible honours were bestowed on him for his role as its co- discoverer. These include the Darwin–Wallace and Linnean Gold Medals of the Linnean Society of London; the Copley, Darwin and Royal Medals of the Royal Society (Britain's premier scientific body); and the Order of Merit (awarded by the ruling Monarch as the highest civilian honour of Great Britain). It was only in the 20th Century that Wallace’s star dimmed while Darwin’s burned ever more brightly. 

So why then did this happen?

The reason may be as follows: in the late 19th and early 20th centuries, natural selection as an explanation for evolutionary change became unpopular, with most biologists adopting alternative theories such as neo-Lamarckism, orthogenesis, or the mutation theory. It was only with the modern evolutionary synthesis of the 1930s and ’40s that it became widely accepted that natural selection is indeed the primary driving force of evolution. By then, however, the history of its discovery had largely been forgotten and many wrongly assumed that the idea had first been published in Darwin’s On the Origin of Species. Thanks to the so-called ‘Darwin Industry’ of recent decades, Darwin’s fame has increased exponentially, eclipsing the important contributions of his contemporaries, like Wallace. A more balanced, accurate and detailed history of the discovery of what has been referred to as “...arguably the most momentous idea ever to occur to a human mind” is long overdue.

ENDNOTES

1. Wallace, A. R. 1855. On the law which has regulated the introduction of new species. Annals and Magazine of Natural History, 16 (2nd series): 184-196.

2. Letter from Darwin to Charles Lyell dated 18th [June 1858] (Darwin Correspondence Database, http://www.darwinproject.ac.uk/entry-2285 accessed 20/01/2013).

3. These were an extract from Darwin’s unpublished essay on evolution of 1844, plus the enclosure from a letter dated 5th September 1857, which Darwin had written to the American botanist Asa Gray.

4. Publishing another person’s work without their agreement was as unacceptable then as it is today. Publishing someone’s novel theory without their consent, prefixed by material designed to give priority of the idea to someone else is ethically highly questionable: Wallace should have been consulted first! Fortunately for Darwin and his supporters, Wallace appeared to be pleased by what has been called the ‘delicate arrangement’.

5. In a letter from Joseph Hooker to Darwin dated 13th and 15th July 1858 (Darwin Correspondence Database, http://www.darwinproject.ac.uk/entry-2307 accessed 20/01/2013), Hooker stated " I send the proofs from Linnæan Socy— Make any alterations you please..."

6. In a letter from Darwin to Charles Lyell dated 18th [June 1858] (Darwin Correspondence Database, http://www.darwinproject.ac.uk/entry-2285 accessed 20/01/2013), Darwin, who was referring to Wallace's essay, says "Please return me the M.S. [manuscript] which he does not say he wishes me to publish..." and in a letter from Darwin to Charles Lyell dated [25th June 1858] (Darwin Correspondence Database, http://www.darwinproject.ac.uk/entry-2294 accessed 20/01/2013), Darwin states that "Wallace says nothing about publication..."

7. Letter from Wallace to A. B. Meyer dated 22nd November 1869 cited in Meyer, A. B. 1895. How was Wallace led to the discovery of natural selection? Nature, 52(1348): 415.

8. See Rachels, J. 1986. Darwin's moral lapse. National Forum: 22-24 (pdf available at http://www.jamesrachels.org/DML.pdf)

9. Letter from Wallace to George Silk dated 1st September 1860 (WCP373 in Beccaloni, G. W. (Ed.). 2012. Wallace Letters Online www.nhm.ac.uk/wallacelettersonline [accessed 20/01/2013])

OTHER NOTES

Please cite this article as: Beccaloni, G. W. 2013. Alfred Russel Wallace and Natural Selection: the Real Story. <http://downloads.bbc.co.uk/tv/junglehero/alfred-wallace-biography.pdf>
This article is a slightly modified version of the introduction by George Beccaloni to the following privately published book: Preston, T. (Ed.). 2013. The Letter from Ternate. UK: TimPress. 96 pp.
Categories: Development

Gluent New World #02: SQL-on-Hadoop with Mark Rittman

Tanel Poder - Thu, 2016-04-07 10:02

Update: The video recording of this session is here:

Slides are here.

Other videos are available at Gluent video collection.

It’s time to announce the 2nd episode of the Gluent New World webinar series!

The Gluent New World webinar series is about modern data management: architectural trends in enterprise IT and technical fundamentals behind them.

GNW02: SQL-on-Hadoop : A bit of History, Current State-of-the-Art, and Looking towards the Future

Speaker:

  • This GNW episode is presented by none other than Mark Rittman, the co-founder & CTO of Rittman Mead and an all-around guru of enterprise BI!

Time:

  • Tue, Apr 19, 2016 12:00 PM – 1:15 PM CDT

Abstract:

Hadoop and NoSQL platforms initially focused on Java developers and slow but massively-scalable MapReduce jobs as an alternative to high-end but limited-scale analytics RDBMS engines. Apache Hive opened-up Hadoop to non-programmers by adding a SQL query engine and relational-style metadata layered over raw HDFS storage, and since then open-source initiatives such as Hive Stinger, Cloudera Impala and Apache Drill along with proprietary solutions from closed-source vendors have extended SQL-on-Hadoop’s capabilities into areas such as low-latency ad-hoc queries, ACID-compliant transactions and schema-less data discovery – at massive scale and with compelling economics.

In this session we’ll focus on technical foundations around SQL-on-Hadoop, first reviewing the basic platform Apache Hive provides and then looking in more detail at how ad-hoc querying, ACID-compliant transactions and data discovery engines work along with more specialised underlying storage that each now work best with – and we’ll take a look to the future to see how SQL querying, data integration and analytics are likely to come together in the next five years to make Hadoop the default platform running mixed old-world/new-world analytics workloads.

Register:

 

If you missed the last GNW01: In-Memory Processing for Databases session, here are the video recordings and slides!

See you soon!

 

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Oracle Management Cloud – IT Analytics

Marco Gralike - Thu, 2016-04-07 06:54
In this post I will give you a first glance of a demo environment of…

next generation of compute services

Pat Shuff - Thu, 2016-04-07 02:07
Years ago I was a systems administrator at a couple of universities and struggled with making sure that systems were operational and supportable. The one thing that frustrated me more than anything else was how long it took to figure out how something was configured. We had over 100 servers in the data center, and on each of these servers we had departmental web servers, mail servers, and various other servers to serve the student and faculty users. We standardized on the Apache web server, but there were different versions, different configurations, and different additions to each one. This was before virtualization and golden masters became a trendy topic, and things were built from scratch. We would put together Linux servers with Apache web servers, PHP, and MySQL. These later became known as LAMP servers. Again, one frustration was the differences between the versions, how they were compiled, and how they were customized to handle a department. It was bad enough that we had different Linux versions, but we also had different versions of every other software combination. Debugging became a huge issue because you first had to figure out how things were configured, then where the logs were stored, and only then could you start looking at what the issue was.

We have been talking about cloud compute services. In past blogs we have talked about deploying an Oracle Linux 6.4 server onto compute clouds in Amazon, Azure, and Oracle. All three look relatively simple. All three are relatively robust. All three have advantages and disadvantages. In this blog we are going to look at using public domain pre-compiled bundles to deploy our LAMP server. Note that we could download all of these modules into our Linux compute services using a yum install command. We could figure out how to do this or look at web sites like digitalocean.com that go through tutorials on how to do this. It is interesting, but I have to ask why. It took about 15 minutes to provision our Linux server. Doing a yum update takes anywhere from 2-20 minutes based on how old your installation is and how many patches have been released. We then take an additional 10-20 minutes to download all of the other modules, edit the configuration files, open up the security ports, and get everything started. We are 60 minutes into something that should take 10-15 minutes.
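
For reference, the manual route on an Oracle Linux 6-style image looks roughly like the sketch below (package names and service commands are assumptions that vary by distribution and version):

sudo yum install -y httpd php php-mysql mysql-server   # Apache, PHP and MySQL
sudo service httpd start && sudo chkconfig httpd on    # start Apache now and at boot
sudo service mysqld start && sudo chkconfig mysqld on  # start MySQL now and at boot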

Enter stage left, bitnami.com. This company does exactly what we are talking about. They take public domain code and common configurations that go a step beyond your basic compute server and provision these configurations into cloud accounts. In this blog we will look at provisioning a LAMP server. We could have just as easily configured a wiki server, Tomcat server, distance education Moodle server, or any of the other 100+ public domain configurations that bitnami supports.

The first complexity is linking your cloud accounts into the bitnami service. Unfortunately, the accounts are split across three different sites: oracle.bitnami.com, aws.bitnami.com, and azure.bitnami.com. The Oracle and Azure account linkages are simple. For Oracle you need to look up the REST endpoint for the cloud service. First, you go to the top right and click the drop down to do account management.

From this you need to look up the rest endpoint from the Oracle Cloud Console by clicking on the Details link from the main cloud portal.

Finally, you enter the identity domain, username, password, and endpoint. With this you have linked the Oracle Compute Cloud Services to Bitnami.

Adding the Azure account is a little simpler. You go to the Account - Subscriptions pull down and add account.

To add the account you download a certificate from the Azure portal as described on the bitnami.com site and import it into the azure.bitnami.com site.

The Amazon linkage is a little more difficult. To start with you have to change your Amazon account according to Bitnami Instructions. You need to add a custom policy that allows bitnami to create new EC2 instances. This is a little difficult to initially understand but once you create the custom policy it becomes easy.

Again, you click on the Account - Cloud Accounts to create a new AWS linkage.

When you click on the create new account you get an option to enter the account name, shared key, and secret key to your AWS account.

I personally am a little uncomfortable providing my secret key to a third party because it opens up access to my data. I understand the need to do this but I prefer using a public/private ssh key to access services and data rather than a vendor provided key and giving that to a third party seems even stranger.

We are going to use AWS as the example for provisioning our LAMP server. To start this we go to http://aws.bitnami.com and click on the Library link at the top right. We could just as easily have selected azure.bitnami.com or oracle.bitnami.com and followed this exact same path. The library list is the same and our search for a LAMP server returns the same image.

Note that we can select the processor core count, disk size, and data center that we will provision into. We don't get much else to choose from, but it does the configuration for us and provisions the service in 10-15 minutes. When you click Create, you get an updated screen that shows progress on what is being done to create the VM.

When the creation is complete you get a list of status as well as password access to the application if there were a web interface to the application (in this case apache/php) and an ssh key for authentication as the bitnami user.

If you click on the ppk link at the bottom right you will download the private ssh key that bitnami generates for you. Unfortunately, there is not a way of uploading your own keys but you can change that after the fact for the users that you will log in as.

Once you have the private key, you get the ip address of the service and enter it into putty for Windows and ssh for Linux/Mac. We will be logging in as the user bitnami. We load the ssh key into the SSH - Auth option in the bottom right of the menu system.

When we connect we will initially get a warning but can connect and execute common commands like uname and df to see how the system is configured.

The only differences between the three interfaces are the shapes that you can choose from. The Azure interface looks similar. Azure has fewer options for processor configuration, so it is shown as a list rather than a sliding scale that changes the processor options and price.

The oracle.bitnami.com create virtual machine interface does not look much different. The server selection is a set of checkboxes rather than a radio checkbox or a sliding bar. You don't get to check which data center that you get deployed into because this is tied to your account. You can select a different identity domain which will list a different data center but you don't get a choice of data centers as you do with the other services. You are also not shown how much the service will cost through Oracle. The account might be tied to an un-metered service which comes in at $75/OCPU/month or might be tied to a metered service which comes in at $0.10/OCPU/hour. It is difficult to show this from the bitnami provisioning interface so I think that they decided to not show the cost as they do with the other services.

In summary, using a service like bitnami for pre-configured and pre-compiled software packages is the future because it has time and cost advantages. All three cloud providers have marketplace vendors that allow you to purchase commercial packages or deploy commercial configurations where you bring your own license for the software. More on that later. Up next, we will move up the stack and look at what it takes to deploy the Oracle database on all three of these cloud services.

Links for 2016-04-06 [del.icio.us]

Categories: DBA Blogs

Use external property file in WLST

Darwin IT - Thu, 2016-04-07 01:41
I frequently create WLST scripts that need properties. Not so exciting, but how do you do that in a convenient way, and how do you detect in a clean way that properties aren't set?

You could read a property file like described here. The basics are to use Java directly to create a Properties object and a FileInputStream to read it:
#Script to load properties file.

from java.io import File
from java.io import FileInputStream
from java.util import Properties


#Load properties file in java.util.Properties
def loadPropsFil(propsFil):

  inStream = FileInputStream(propsFil)
  propFil = Properties()
  propFil.load(inStream)

  return propFil

I think the main disadvantage is that it clutters the script code and you need to call 'myPropFil.getProperty(key)' to get the property value.

Following the documentation you can use the commandline option '-loadProperties propertyFilename' to explicitly provide a property file. I found this actually quite clean. Every property in the file becomes automatically available as a variable in your script.

Besides that, I found a terrific blog post on error handling in WLST. It states that with 'except NameError, e:' you can handle a reference to a variable that was not declared earlier.

I combined these two sources to come up with a script template that allows me to provide property files for different target environments as a commandline option, while detecting whether properties are provided. So let's assume you create a property file named, for instance, 'localhost.properties' like:
#############################################################################
# Properties for localhost Integrated Weblogic
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-06
#
#############################################################################
#
# Properties for localhost
adminUrl=localhost:7101
adminUser=weblogic
adminPwd=welcome1
clustername=LocalCluster
# Generic properties for creating JMS components
#jmsFileStoresBaseDir=/app/oracle/config/cluster_shared/filestore/
jmsFileStoresBaseDir=c:/Data/JDeveloper/SOA/filestore
#Filestore 01
...

Then you can use that with the following script, named for instance 'createJMSServersWithFileStoreV2.py':
#############################################################################
# Create FileStores and JMS Servers
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-06
#
#############################################################################
# Modify these values as necessary
import sys, traceback
scriptName = 'createJMSServersWithFileStoreV2.py'
#
#
def usage():
  print 'Call script as: '
  print 'Windows: wlst.cmd'+scriptName+' -loadProperties localhost.properties'
  print 'Linux: wlst.sh'+scriptName+' -loadProperties environment.properties'
  print 'Property file should contain the following properties: '
  print "adminUrl='localhost:7101'"
  print "adminUser='weblogic'"
  print "adminPwd='welcome1'"

def main():
  try:
    #Connect to administration server
    print '\nConnect to AdminServer via '+adminUrl+' with user '+adminUser
    connect(adminUser, adminPwd, adminUrl)
    ...
  except NameError, e:
    print 'Apparently properties not set.'
    print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
    usage()
  except:
    apply(traceback.print_exception, sys.exc_info())
    stopEdit('y')
    exit(exitcode=1)

#call main()
main()
exit()

You can call it like 'wlst createJMSServersWithFileStoreV2.py -loadProperties localhost.properties'. If you don't provide a property file you'll get:
e:\wls>wlst createJMSServersWithFileStoreV2.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Apparently properties not set.
Please check the properties: exceptions.NameError adminUrl
Call script as:
Windows: wlst.cmdcreateJMSServersWithFileStoreV2.py -loadProperties localhost.properties
Linux: wlst.shcreateJMSServersWithFileStoreV2.py -loadProperties environment.properties
Property file should contain the following properties:
adminUrl='localhost:7101'
adminUser='weblogic'
adminPwd='welcome1'


Exiting WebLogic Scripting Tool.


e:\wls>

Pretty clean. You could even use the 'except NameError, e:' construct to conditionally execute code only when certain properties are set, by ignoring or handling the situation where particular properties are intentionally not provided.

Tomcat Runtime added to Web Console of IBM Bluemix

Pas Apicella - Thu, 2016-04-07 01:05
I almost always use the Tomcat buildpack within IBM Bluemix for my Java-based applications. By default IBM Bluemix will use the IBM Liberty buildpack for Java apps unless you specify otherwise. The buildpacks on Bluemix can be viewed using "cf buildpacks", and the Tomcat buildpack is referred to as "java_buildpack".

So to use the tomcat buildpack in a manifest.yml you would target it as follows

applications:
- name: pas-javaapp
  memory: 512M
  instances: 1
  host: pas-javaapp
  domain: mybluemix.net
  path: ./passapp.war
  buildpack: java_buildpack
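
With the manifest saved next to the WAR file, deploying is simply a matter of running cf push from that directory; the cf CLI picks up manifest.yml automatically:

cf push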

Now the Web Console catalog for "Runtimes" shows Tomcat for those creating an application from the Console itself. This was done so that those who wish to use Tomcat on Bluemix can clearly see it's an option, as per the screenshot below, and don't have to start with the Liberty buildpack if they don't wish to do so.


Categories: Fusion Middleware

Partner Webcast – Oracle WebLogic Server 12.2.1 Multitenancy and Continuous Availability

As part of the latest major Oracle Fusion Middleware release, Oracle announced the largest release of Oracle WebLogic Server in a decade. Oracle WebLogic Server 12c, the world’s first cloud-native,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Management Cloud – Application Performance Monitoring

Marco Gralike - Wed, 2016-04-06 16:42
A while ago I created a first post about the Oracle Management Cloud ( #OMC…

Log Buffer #468: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-04-06 15:38

This Log Buffer Edition rounds up Oracle, SQL Server, and MySQL blog posts of the week.

Oracle:

When using strings such as "FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=9,10" within the scheduler, sometimes it's not readily apparent how this will translate to the actual dates and times of day that the scheduled activity will run. To help you understand, a nice little utility to use is EVALUATE_CALENDAR_STRING.
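
For illustration, a minimal sketch of calling it for the calendar string above (DBMS_OUTPUT is only used here to print the computed date):

DECLARE
  l_next_run TIMESTAMP WITH TIME ZONE;
BEGIN
  DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING(
    calendar_string   => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=9,10',
    start_date        => SYSTIMESTAMP,
    return_date_after => SYSTIMESTAMP,
    next_run_date     => l_next_run);
  DBMS_OUTPUT.PUT_LINE('Next run: ' || TO_CHAR(l_next_run));
END;
/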

Most developers have struggled with wires in SOA composites. You may find yourself in a situation where a wire has been deleted. Some missing wires are restored by JDeveloper. Other missing wires have to be added manually, by simply re-connecting the involved adapters and components. Simple.

In-Memory Parallel Query and how it works in 12c.

Oracle recently launched a new family of offerings designed to enable organizations to easily move to the cloud and remove some of the biggest obstacles to cloud adoption. These first-of-a-kind services provide CIOs with new choices in where they deploy their enterprise software and a natural path to easily move business critical applications from on premises to the cloud.

Two Oracle Server X6-2 systems, using the Intel Xeon E5-2699 v4 processor, produced a world record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 27,509.59 SPECjEnterprise2010 EjOPS. One Oracle Server X6-2 system ran the application tier and the second Oracle Server X6-2 system ran the database tier.

SQL Server:

To be able to make full use of the system catalog to find out more about a database, you need to be familiar with the metadata functions.

Powershell To Get Active Directory Users And Groups into SQL!

A code review is a serious business; an essential part of development. Whoever signs off on a code review agrees, essentially, that they would be able to support it in the future, should the original author of the code be unavailable to do it.

Change SQL Server Service Accounts with Powershell

Learn how to validate integer, string, file path, etc. input parameters in PowerShell as well as see how to test for invalid parameters.

MySQL:

The MySQL Utilities team has announced a new beta release of MySQL Utilities. This release includes a number of usability and stability improvements, as well as a few enhancements.

In this webinar, we will discuss the practical aspects of migrating a database setup based on traditional asynchronous replication to multi-master Galera Cluster.

Docker has gained widespread popularity in recent years as a lightweight alternative to virtualization. It is ideal for building virtual development and testing environments. The solution is flexible and seamlessly integrates with popular CI tools.

How ProxySQL adds Failover and Query Control to your MySQL Replication Setup

Read-write split routing in MaxScale

Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator