
Feed aggregator

A Test Drive With Azure

Pythian Group - Thu, 2016-05-19 11:55

While recently reading the latest Microsoft Azure updates, I found that we can try some VMs from the Azure Marketplace for free, even without any Azure subscription. The feature is called "Azure test drive". I found it interesting and decided to give it a try to see what it could offer. Here I am going to share my "driving" experience with the service.


I opened a browser where I was not logged in to Azure, and went to the Test drives on Microsoft Azure webpage. There you can see a list of different VMs, including DataStax, NetApp DataVault, and others; there are dozens of services listed. I picked "SoftNAS Cloud NAS", clicked it, and was redirected to the product page. On this page I had two options to create a VM. The main one was the usual "Create Virtual Machine" button, and the second one was located a bit lower and stated "Or, take a free test drive first". I correctly assumed that was exactly what I needed (not rocket science) and clicked it. I was offered either to log in or to create a new "Orbitera" account for Azure Marketplace test drives.


The new service did not require an Azure account, but it still asked me to create one with another service. I didn't have that account yet, and opted to create a new one. On the account creation page I was asked standard questions, filled out the forms, confirmed at the bottom that I was not a robot, and pushed "sign up". I got an email with a validation link just thirty seconds after that.
After validating my account I was able to sign in, and got to the test drive page. There I found a "Launch Test Drive" button, an introduction video, and a couple of PDF guides to download. One of the guides was a SoftNAS Cloud NAS demo, which helped me get the most out of the test drive; the other was about "SoftNAS Cloud NAS" itself. I pushed the button and got a counter showing how many minutes remained to complete the deployment of my test VM.


It took less than 7 minutes to get a link to my SoftNAS interface along with credentials. By the way, you don't need to sit and wait while it is creating; you get an email when the VM is ready to use and another when your time is up. The test drive VM lives only one hour, and the test drive page shows a counter with the time left for your exercise. I got an hour to test SoftNAS. I later tried another test drive for SUSE HPC and got only 30 minutes for it. I think the test time depends on the service you want to try out, and how much time you may need to go through all the options.
Interestingly, I got a warning connecting to my newly created SoftNAS. It looked like the certificate for the web interface was invalid. My Firefox browser complained "The owner of softnas33027publicipdns.eastus.cloudapp.azure.com has configured their website improperly." Of course you can waive that warning and add the certificate to the exceptions, but I think it would be better for SoftNAS to fix it.


So, I played with the test VM, shared a couple of volumes, and was able to mount them on another VM and on my Mac. It worked pretty well and allowed me to familiarize myself with the interface and options. When my hour was up, the SoftNAS went down and all my mounts became unavailable. If you need more time you can just fire up another trial and use one more hour. Yes, it will be a brand new VM, and no configuration will carry over from the previous one, but it will let you explore the features you weren't able to try before time ran out.
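
Mounting one of the shared volumes is the standard NFS exercise. A minimal sketch for the Linux VM side, assuming an export named /export/pool0/vol01 (the actual export path is shown in the SoftNAS UI; the hostname is the one assigned to my test drive):

sudo mkdir -p /mnt/softnas
sudo mount -t nfs softnas33027publicipdns.eastus.cloudapp.azure.com:/export/pool0/vol01 /mnt/softnas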


I liked the service. It provides an opportunity to test and play with new products in the Marketplace and decide whether they suit you or not. You can also just show somebody how to work with SoftNAS, how to share and mount a new NFS volume, or do other things. I hope the test drive program keeps growing and we see more and more new brands among the available products.

Categories: DBA Blogs

ORA-56841: Master Diskmon cannot connect to a CELL

Amardeep Sidhu - Thu, 2016-05-19 10:45

Faced this error while querying v$asm_disk after adding new storage cell IPs to cellip.ora on the DB nodes of an existing cluster on Exadata. The query ends with ORA-03113 end-of-file on communication channel, and ORA-56841 is reported in $ORA_CRS_HOME/log/<hostname>/diskmon/diskmon.log. The reason in my case was that the new cell was using a different subnet for IB. It was pingable from the DB nodes but querying v$asm_disk wasn't working. Changing the IB subnet on the new cell to the one used by the existing cells fixed the issue.
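
For reference, cellip.ora on each DB node is just a list of the cells' IB addresses, and all cells must sit on the same IB subnet. A sketch with illustrative IPs:

cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"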


Categories: BI & Warehousing

Oracle JET Master-Detail with ADF BC REST

Andrejus Baranovski - Thu, 2016-05-19 10:37
One of the most typical use cases in enterprise applications is the Master-Detail relationship. I have decided to implement it in JET and to share this practical implementation with you. Hopefully it will be useful when you are learning and building JET applications.

Sample application - JETCRUDApp_v8.zip - implements a table with row selection in JET (you must run the ADF BC REST application in JDEV 12.2.1 and JET in NetBeans 8). On row selection, the Job ID is retrieved from the selected row and, based on this, the Job data is fetched. Minimum and maximum salary values are taken from the Job data and displayed in the chart, along with the selected employee's salary. A separate REST call is executed for each new selection in the master, and the detail data changes; see how fast the data changes in the chart:


Detail data displayed in the chart is fetched based on the Job ID selected in the master table:


Each time a new row is selected in the table, the chart changes (a REST call to ADF BC is executed in the background, using the Job ID of the selected employee as a key):


A great thing about JET - it is responsive out of the box. On a narrow screen, UI components are re-arranged into a single column layout. The editable form is displayed below the table:


The chart is displayed below the editable form:


Let's take a look at how the detail data displayed in the chart is fetched. First of all, the chart UI component is defined in HTML; it points to an observable variable holding the series values:


This observable variable is defined as an array in JavaScript. An observable variable pushes value changes to the UI automatically, without additional intervention:


I have defined a data structure for the Job response; it includes the JobId key and the Minimum/Maximum values. The REST URL will be assigned later, in the table selection listener; it will change for each selected row, to retrieve a different REST resource. We leave the model definition without a REST URL here:


The table selection listener creates a new model and executes a fetch operation for the job. A REST URL with the Job key is constructed to fetch the required detail data. In the success callback the returned data is accessed and pushed into the observable collection, which is displayed in the chart:
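
For reference, the per-selection round trip can be reproduced outside the application with curl. This is only a sketch: the host, port, context root, and credentials are illustrative, and only the /rest/<version>/<resource>/<key> shape follows the ADF BC REST convention:

curl -u redsam:welcome1 http://localhost:7101/restapp/rest/1/Jobs/SA_REP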

ADVISOR WEBCAST: WebCenter Enterprise Capture General Functionality and Information - Jun 22nd

WebCenter Team - Thu, 2016-05-19 10:05

Reposting from the WebCenter Content Proactive Support blog. The original post was written by Alan Boucher.

ADVISOR WEBCAST: WebCenter Enterprise Capture General Functionality and Information

Schedule:

  • Wednesday, June 22, 2016 08:00 AM (US Pacific Time)
  • Wednesday, June 22, 2016 11:00 AM (US Eastern Time)
  • Wednesday, June 22, 2016 05:00 PM (Central European Time)
  • Wednesday, June 22, 2016 08:30 PM (India Standard Time)

Abstract:

This one hour session is recommended for technical and functional users who are using WebCenter Capture, and for those who may be planning to use it and would like more information. It is a presentation on the base functionality of the Oracle WebCenter Enterprise Capture product.

Topics Include:

    • What is Enterprise Capture and why is it useful?
    • What tools come with Capture / what functionality do those specific tools provide?
      • Client
      • Import Processor
      • Document Conversion Processor
      • Recognition Processor
      • Commit Processor

Duration: 1 hr

Current Schedule and Archived Downloads can be found in Note 740966.1.

WebEx Conference Details

Topic: WebCenter Enterprise Capture General Functionality and Information
Event Number: 596 238 193
Event Passcode: 909090

Register for this Advisor Webcast: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=596238193&t=a

Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.

InterCall Audio Instructions

A list of Toll-Free Numbers can be found below.

  • Participant US/Canada Dial-in #: 1866 230 1938    
  • International Toll-Free Numbers
  • Alternate International Dial-In #: +44 (0) 1452 562665
  • Conference ID: 11837830

VOICESTREAMING AVAILABLE

Oracle BITAND Function with Examples

Complete IT Professional - Thu, 2016-05-19 05:00
In this article, I'll explain how to use the Oracle BITAND function and show you some examples. Purpose of the Oracle BITAND Function The Oracle BITAND function is used to perform what's called a standard bitwise AND operation. It compares two numbers and outputs a third number. I'll explain what a bitwise AND […]
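
As a quick taste of what the article covers: 6 is 110 in binary and 3 is 011, so a bitwise AND leaves 010, which is 2. A sketch with an illustrative connect string:

echo "SELECT BITAND(6, 3) AS result FROM dual;" | sqlplus -s scott/tiger
# returns RESULT = 2
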
Categories: Development

Database in Amazon EC2

Pat Shuff - Thu, 2016-05-19 01:07
Today we are going to look at what it takes to get a 12c database instance up and running in Amazon EC2. Note that this is different from our previous posts on getting Standard Edition running on Amazon and Enterprise Edition running on Amazon RDS. We are going to take the traditional approach, as if we were installing the database on a virtual image like VMWare, HyperV, or OracleVM. The approach is to take IaaS and layer the database upon it.

There are a few options for how to create the database instance. We can load everything from scratch, we can load a pre-defined AMI, we can create a golden image and clone it, we can do a physical-to-virtual conversion and import the instance into the cloud, or we can create a Chef recipe and automate everything. In this blog we are going to skip the load-everything option because it is very cumbersome and time consuming. You basically would have to load the operating system, patch the operating system, create users and groups, download the binaries, unpack the binaries, manage the firewall, and manage the cloud port access rights. Each of these steps takes 5-30 minutes, so the total time to get the install done would be 2-3 hours. Note that this is still much better than purchasing hardware, putting it in a data center, loading the operating system, and following all the same steps. We are also going to skip the golden image and cloning option since it is basically loading everything from scratch and then cloning an instance. We will look at cloning a physical machine and importing it into the cloud in a later blog. Here we are going to look at selecting a pre-defined AMI and loading it.

One of the benefits of the Marketplace model is that you get a pre-defined and pre-configured installation of a software package. Oracle provides the bundle for Amazon in the form of an AMI. For these instances you need to own your own perpetual license. It is important to understand the licensing implications and how Oracle defines licensing for AWS. Authorized Cloud Environment instances with 4 or fewer virtual cores are counted as 1 socket, which is considered equivalent to a processor license. For Authorized Cloud Environment instances with more than 4 virtual cores, every 4 virtual cores used (rounded up to the closest multiple of 4) equate to a licensing requirement of 1 socket. This is true for the Standard Edition license. For the Enterprise Edition license the assumption is that the cloud processor is an x86 chip set, so a processor license is required for every 2 virtual cores. All of the other software like partitioning, diagnostics, tuning, compression, advanced security, etc. also needs to be licensed with the same metric.
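
To make that concrete: an m3.xlarge has 4 virtual cores, so under Standard Edition it counts as 1 socket and needs 1 processor license, while under the Enterprise Edition metric those same 4 virtual cores divided by 2 require 2 processor licenses, which is the figure used in the cost comparison below.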

If we look at the options for AMIs available we go to the console, click on EC2, and click on Launch Instance.

When we search for Oracle we get a wide variety of products like Linux, SOA, and database. If we search for Oracle database we refine the search a little more, but still get supplementary products that relate to the database rather than the database itself. If we search for Oracle database 12c we get six results.

We find two AMIs that look the same; the key difference is that one limits you to 16 cores and the other does not. We can select either one for our tests. If we search the Community AMIs we get back a variety of 11g and 10g installation options but no 12c options. (Note that the first screen shot is the Standard Edition description; it should be the Enterprise Edition since two are listed.)

We are going to use the Commercial Marketplace and select the first 12c database instance. This takes us to a screen that lets us select the processing shape. Note that the smaller instances are not allowed because you need a little memory and a single core does not run the database very well. This is one of the advantages over selecting an operating system ourselves and finding out that we selected too few cores or not enough memory. Our selections are broken down into general purpose, compute optimized, or storage optimized. The key difference is how many cores, how much memory, and dedicated vs generic IOPs to the disk.

We could select an m3.xlarge or c3.xlarge and the only difference would be the amount of memory allocated. Network appears to be a little different, with the c3.xlarge having less network throughput. We are going to select the m3.xlarge. Looking at pricing we should be charged $0.351/hour for the EC2 instance, $0.125 per GB-month provisioned or $5/month for our 40 GB of disk, and $0.065 per provisioned IOP-month or $32.50/month. Our total cost of running this m3.xlarge instance will be $395.52/month or $13.18/day. We can compare this to a similarly configured Amazon RDS at $274.29/month. We need to take into account that we will need to purchase two processor licenses of the Enterprise Edition at $47,500 per processor license. The cost of this license over four years will be $95,000 for the initial license plus 22%, or $20,900 per year, for support. Our four year cost of ownership will be $178,600. Amortizing this over four years brings the cost to $3,720/month. Our all-in cost for the basic Enterprise Edition will be $4,116.35/month. If we want to compare this to the DBaaS cost that we covered earlier we also need to add the cost of Transparent Data Encryption so that we can encrypt data in the cloud. This feature is included in the Advanced Security option, which is priced at $15,000 per processor license. The four year cost of ownership for this package is $56,400, bringing the additional cost to $1,175/month. We will be spending $5,291.35/month for this service with Amazon.

If we want to compare this with PaaS we have the option of purchasing the same instance at $1,500/OCPU/month, or $3,000/month ($2.52/OCPU/hour), for the Enterprise Edition on a Virtual Image. We only need two OCPUs because an OCPU provides two threads per virtual core where Amazon provides one thread per core. We are really looking for thread count and not virtual core count. Four virtual processors in Amazon are equivalent to two OCPUs, so our cost for a virtual image will be $1.5K/OCPU * 2 OCPUs. If we go with the Database as a Service we are looking at $3,000/OCPU/month, or $6,000/month ($5.04/OCPU/hour), for the Enterprise Edition as a service. What we need to rationalize is the extra $708/month for the PaaS service. Do we get enough benefit from having this as a service, or do we spend more time and energy up front to pay less each month?

If we are going to compare the High Performance edition against the Amazon EC2 edition we have to add in the options that we get with High Performance. There are 13 features that need to be licensed to make the comparison the same. Each of these options costs anywhere from $11,500 to $23,000 per processor. We saw earlier that each option adds roughly $1,175/month, so adding the three most popular options (partitioning, diagnostics, and tuning) will cost $3,525/month more. The High Performance edition will cost us $2,000/OCPU/month, or $4K/month, for the virtual image and $4,000/OCPU/month, or $8K/month, as a service. Again, we get ten more options bundled in with the High Performance option at $8K/month, compared to $8,816.35 with the AWS EC2 option. We also get all of the benefits of PaaS vs IaaS for this feature set.

Once we select our AMI and instance type, we have to configure the options. We can request a spot instance, but this is highly discouraged for a database: if your instance is terminated because its capacity is needed, you could easily lose data unless you have DataGuard configured and set up for synchronous data commit. We can provision this instance into a virtual private network, which is different from the way it is done in the Oracle cloud. In the Oracle cloud you provision the service then configure the virtual instance; in Amazon EC2 both are done at the same time. You do have the option of provisioning the instance into one of five instance zones, but all are located in US East. You can define the administration access roles with the IAM role option; you have to define these prior to provisioning the database. You can also control the operation of this instance from the console: you can stop or terminate the instance when it is shut down, as well as prohibit anyone from terminating the instance unless they have rights to do so. You can enable CloudWatch (at an additional charge of $7.50/month) to monitor this service and restart it if it fails. We can also add an elastic block attachment so that our data can migrate from one instance to another, at an additional cost.

We now have to consider the reserved IOPS for our instance when we look at the storage. By default we get 8 GB for the operating system, 50 GB for the data area with 500 provisioned IOPS, and 8 GB for log space. The cost of the reserved IOPS adds $38.75/month. If we were counting every penny we would also have to look at outbound traffic from the database: if we read all of our 50 GB back, it would increase the price of the service by a little over $3/month. Given that this is relatively insignificant we can ignore it, but it was worth checking with the simple monthly calculator.

Our next screen is the tags, which we will not use but which could be used for searching if we had a large number of instances. The screen after that defines the open ports for this service. We want to add other ports like 1521 for the database, and 443 and 80 for Application Express. Ports 1158 and 22 were predefined for us to allow enterprise manager and ssh access.
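
If you script your deployments, the same rules can be added with the AWS CLI. A sketch only: the security group name is illustrative, and the source CIDR should be narrowed to your own network rather than left open to the world:

aws ec2 authorize-security-group-ingress --group-name oracle-db-sg --protocol tcp --port 1521 --cidr 203.0.113.0/24  # database listener
aws ec2 authorize-security-group-ingress --group-name oracle-db-sg --protocol tcp --port 443 --cidr 203.0.113.0/24   # Application Express over https
aws ec2 authorize-security-group-ingress --group-name oracle-db-sg --protocol tcp --port 80 --cidr 203.0.113.0/24    # Application Express over http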

At this point we are ready to launch our instance. We will have 50 GB of table space available and the database will be provisioned and ready for us upon completion.

Some things to note in the provisioning of this instance. We were never asked for an OID for the database. We were never asked for a password associated with the sys, system, or sysdba user account. We were never asked for a password to access the operating system instance. When we click on launch we are asked for an ssh key to access the instance once it is created.

When you launch the instance you see a splash screen, then a detail screen as the instance is created. You also get an email confirming that you are provisioning an instance from the marketplace. At this point I noticed that I had provisioned Standard Edition and not Enterprise Edition. The experience is the same and nothing should change up to this point, so we can continue with the SE AMI.

Once the instance is created we can look at the instance information and attach to the service via putty or ssh. The IP address that we were assigned was 54.242.14.146. We load the private key and IP address into putty and connect. Logging in as oracle failed, and root gave an error message; once we connect as ec2-user we are asked if we want to create a database, and prompted to enter the OID and the sys, system, and dbsnmp passwords.

The database creation takes a while (15-30 minutes according to the create script) and you get a percent-complete notification as it progresses. At this point we have a database provisioned, the network configured, and security through ssh keys to access the instance, and we should be ready to connect to our database with SQL Developer. In our example it took over an hour to create the database after taking only five minutes to provision the operating system instance; the process stalled at 50% complete and sat there for a very long time. I also had to copy /home/ec2-user/.ssh/authorized_keys into the /home/oracle/.ssh directory (after I created it) to allow the oracle user to log in. The ec2-user account has rights to execute as root, so you can create this directory, copy the file, and change ownership of the .ssh directory and contents to oracle. After you do this you can log in as oracle and manage the database as the user that owns the processes and directories under /u01.
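
The key-copying steps look roughly like this, run as ec2-user. The oinstall group is an assumption; check the oracle user's primary group with id oracle first:

sudo mkdir -p /home/oracle/.ssh
sudo cp /home/ec2-user/.ssh/authorized_keys /home/oracle/.ssh/
sudo chown -R oracle:oinstall /home/oracle/.ssh  # group assumed, verify with: id oracle
sudo chmod 700 /home/oracle/.ssh
sudo chmod 600 /home/oracle/.ssh/authorized_keys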

It is important to note that the database in EC2 provides more features and functions than the Amazon RDS version of the database. Yes, you get automated backup with RDS, but it is basically a snapshot to another storage cloud instance. With the EC2 instance you get features like Spatial, multitenant, and sys access to the database. You also get the option to use RMAN for backups to directories that you can copy offsite. You can set up DataGuard and Enterprise Manager. The EC2 feature set is significantly more robust but requires more work to set up and operate.

In summary, we looked at what it takes to provision a database onto Amazon EC2 using a pre-defined AMI. We also looked at the cost of doing this and found that we can minimally do it at roughly $5.3K/month. When we add the features that are typically desired, this price grows to $8.8K/month. We compared this to running DBaaS in a virtual instance in the Oracle Public Cloud at $6K/month (with a $3K/month smaller footprint available) and DBaaS as a service at $8K/month (with a $4K/month smaller footprint available). We talked about the optional packs and packages that come with the High Performance option, and about the benefits of PaaS vs IaaS. We did not get into the patching, backup, and restart features provided with PaaS, but did touch on them briefly when we went through our instance launch. We also compared this to the Amazon RDS instance, which is similar in features and functions at about a hundred dollars per month cheaper. The bulk of the cost is the database license, not the compute or storage configuration. It is important to note that the cost of the database perpetual license is being paid whether you are running the service or not. With PaaS you get the option of keeping the data active in cloud storage attached to a running compute engine, while turning off the database license on an hourly or monthly basis to save money if this fits your usage model of a database service.

Surveillance data in ordinary law enforcement

DBMS2 - Wed, 2016-05-18 21:45

One of the most important issues in privacy and surveillance is also one of the least-discussed — the use of new surveillance technologies in ordinary law enforcement. Reasons for this neglect surely include:

  • Governments, including in the US, lie about this subject a lot. Indeed, most of the reporting we do have is exposure of the lies.
  • There’s no obvious technology industry ox being gored. What I wrote in another post about Apple, Microsoft et al. upholding their customers’ rights doesn’t have a close analogue here.

One major thread in the United States is:

  • The NSA (National Security Agency) collects information on US citizens. It turns a bunch of this over to the “Special Operations Division” (SOD) of the Drug Enforcement Administration (DEA).
  • The SOD has also long collected its own clandestine intelligence.
  • The SOD turns over information to the DEA, FBI (Federal Bureau of Investigation), IRS (Internal Revenue Service) and perhaps also other law enforcement agencies.
  • The SOD mandates that the recipient agencies lie about the source of the information, even in trials and court filings. This is called “parallel construction”, in that the nature of the lie is to create another supposed source for the original information, which has the dual virtues of:
    • Making it look like the information was obtained by allowable means.
    • Protecting confidentiality of the information’s true source.
  • There is a new initiative to allow the NSA to share more surveillance information on US citizens with other agencies openly, thus reducing the “need” to lie, and hopefully gaining efficiency/effectiveness in information-sharing as well.

Similarly, StingRay devices that intercept cell phone calls (and thus potentially degrade service) are used by local police departments, who then engage in “parallel construction” for several reasons, one simply being an NDA with manufacturer Harris Corporation.

Links about these and other surveillance practices are below.

At this point we should note the distinction between intelligence/leads and admissible evidence.

  • Intelligence (or leads) is any information that can be used to point law enforcement or security forces at people who either plan to do or already have done unlawful and/or very harmful things.
  • Admissible evidence is information that can legally be used to convict people of crimes or otherwise bring down penalties and sanctions upon them.

I won’t get into the minutiae of warrants, subpoenas, probable cause and all that, but let’s just say:

  • In theory there’s a semi-bright line between intelligence and admissible evidence; i.e., there’s some blurring, but in most cases the line can be pretty easily seen.
  • In practice there’s a lot of blurring. Parallel construction is only one of the ways the semi-bright line gets scuffed over.
  • Even so, this distinction has great value. The number of people who have been badly harmed in the US by inappropriate use of inadmissible intelligence isn’t very high …
  • … yet.

“Yet” is the key word. My core message in this post is that — despite the lack of catastrophe to date — the blurring of the intelligence/evidence line needs to be greatly reversed:

Going forward, the line between intelligence and admissible evidence needs to be established and maintained in a super-bright state.

As you may recall, I’ve said that for years, in a variety of different phrasings. Still, it’s a big enough deal that I feel I should pound the table about it from time to time — especially now, when public policy in other aspects of surveillance is going pretty well, but this area is headed for disaster. My argument for this view can be summarized in two bullet points:

  • Massive surveillance is inevitable.
  • Unless the uses of the resulting information are VERY limited, freedoms will be chilled into oblivion.

I recapitulate the chilling effects argument frequently, so for the rest of this post let’s focus on the first bullet point. Massive surveillance will be a fact of life for reasons including:

  • As a practical political matter, domestic surveillance will be used at least for anti-terrorism. If you doubt that — please just consider the number of people who support Donald Trump.
  • Actually, the constituency for anti-terrorism surveillance is much more than just the paranoid idiots. Indeed — and notwithstanding the great excesses of anti-terrorism propaganda around the world — that constituency includes me. :) My reasons start:
    • In a country of well over 300 million people, there probably are a few who are both crazy and smart enough to launch Really Bad Attacks. Stopping them before they act is a Very Good Idea.
    • The alternative is security — or more likely security theater — measures that are intrusive across the board. I like unfettered freedom of movement, for example. But I can barely stand the TSA (Transportation Security Administration).
  • Commercial “surveillance” is intense. And it’s essential to the internet economy.

And so I return to the point I’ve been making for years: Surveillance WILL happen. So the use of surveillance information needs to be tightly limited.

Related links:

  • Reason's recent rant about parallel construction contains a huge number of links. Ditto a calmer Radley Balko blog for the Washington Post. (March, 2016).
  • Reuters gave details of the SOD’s thou-shalt-lie mandates in August, 2013.
  • If you have a clearance and work in the civilian sector, you may be subject to 24/7 surveillance, aka continuous evaluation, for fear that you might be the next Ed Snowden. (March, 2016)
  • License plate scanning databases are already a big deal in law enforcement. (October, 2015)
  • StingRay-type devices are powerful, and have been for quite a few years. They’re really powerful. Procedures related to StingRay surveillance are in flux. (2015)
  • Chilling effects are real. (April, 2016)
  • At least one federal court has decided that tracking URLs visited without a warrant is an illegal wiretap. Other courts think your URL visits, shopping history, etc. are fair game. (November, 2015)
  • Pakistan in effect bugged citizens’ cell phones to track their movements and force polio vaccines on them. (November, 2015)
  • This is not totally on-topic, but it does support worries about what the government can do with surveillance-based analytics — law enforcement can wildly exaggerate the significance of its “scientific” evidence, and gain bogus convictions as a result. (2015-2016).
  • The Electronic Frontier Foundation offers a dated but fact-filled overview of NSA domestic spying (2012-2013).
Categories: Other

Governments vs. tech companies — it’s complicated

DBMS2 - Wed, 2016-05-18 21:42

Numerous tussles fit the template:

  • A government wants access to data contained in one or more devices (mobile/personal or server as the case may be).
  • The computer’s manufacturer or operator doesn’t want to provide it, for reasons including:
    • That’s what customers prefer.
    • That’s what other governments require.
    • Being pro-liberty is the right and moral choice. (Yes, right and wrong do sometimes actually come into play. :) )

As a general rule, what’s best for any kind of company is — pricing and so on aside — whatever is best or most pleasing for their customers or users. This would suggest that it is in tech companies’ best interest to favor privacy, but there are two important quasi-exceptions:

  • Recommendation/personalization. E-commerce and related businesses rely heavily on customer analysis and tracking.
  • When the customer is the surveiller. Governments pay well for technology that is used to watch over their citizens.

I used the “quasi-” prefix because screwing the public is risky, especially in the long term.

Something that is not even a quasi-exception to the tech industry’s actual or potential pro-privacy bias is governmental mandates to let their users be watched. In many cases, governments compel privacy violations, by threat of severe commercial or criminal penalties. Tech companies should and often do resist these mandates as vigorously as they can, in the courts and/or via lobbying as the case may be. Yes, companies have to comply with the law. However, it’s against their interests for the law to compel privacy violations, because those make their products and services less appealing.

The most visible example of all this right now is the FBI/Apple kerfuffle. To borrow a phrase — it’s complicated. Among other aspects:

  • Syed Rizwan Farook, one of the San Bernardino terrorist murderers, had 3 cell phones. He carefully destroyed his 2 personal phones before his attack, but didn’t bother with his iPhone from work.
  • Notwithstanding this clue that the surviving phone contained nothing of interest, the FBI wanted to unlock it. It needed technical help to do so.
  • The FBI got a court order commanding Apple’s help. Apple refused and appealed the order.
  • The FBI eventually hired a third party to unlock Farook’s phone, for a price that was undisclosed but >$1.3 million.
  • Nothing of interest was found on the phone.
  • Stories popped up of the FBI asking for Apple’s help unlocking numerous other iPhones. The courts backed Apple or not depending on how they interpreted the All Writs Act. The All Writs Act was passed in the first-ever session of the US Congress, in 1789, and can reasonably be assumed to reflect all the knowledge that the Founders possessed about mobile telephony.
  • It’s widely assumed that the NSA could have unlocked the phones for the FBI — but it didn’t.

Russell Brandom of The Verge collected links explaining most of the points above.

With that as illustration, let’s go to some vendor examples:

All of these cases seem consistent with my comments about vendors’ privacy interests above.

Bottom line: The technology industry is correct to resist government anti-privacy mandates by all means possible.

Categories: Other

Privacy and surveillance require our attention

DBMS2 - Wed, 2016-05-18 21:41

This year, privacy and surveillance issues have been all over the news. The most important, in my opinion, deal with the tension among:

  • Personal privacy.
  • Anti-terrorism.
  • General law enforcement.

More precisely, I’d say that those are the most important in Western democracies. The biggest deal worldwide may be China’s movement towards an ever-more-Orwellian surveillance state.

The main examples on my mind — each covered in a companion post — are:

Legislators’ thinking about these issues, at least in the US, seems to be confused but relatively nonpartisan. Support for these assertions includes:

I do think we are in for a spate of law- and rule-making, especially in the US. Bounds on the possible outcomes likely include:

  • Governments will retain broad powers for anti-terrorism. If there was any remaining doubt, the ISIS/ISIL/Daesh-inspired threats guarantee that surveillance will be intense.
  • Little will happen in the US to clip the wings of internet personalization/recommendation. To a lesser extent, that’s probably true in other developed countries as well.
  • Non-English-speaking countries will maintain data sovereignty safeguards, both out of genuine fear of (especially) US snooping and as a pretext to support their local internet/cloud service providers.

As always, I think that the eventual success or failure of surveillance regulation will depend greatly on the extent to which it accounts for chilling effects. The gravity of surveillance’s longer-term dangers is hard to overstate, yet  they still seem broadly overlooked. So please allow me to reiterate what I wrote in 2013 — surveillance + analytics can lead to very chilling effects.

When government — or an organization such as your employer, your insurer, etc. — watches you closely, it can be dangerous to deviate from the norm. Even the slightest non-conformity could have serious consequences.

And that would be a horrific outcome.

So I stand by my privacy policy observations and prescriptions from the same year:

… direct controls on surveillance … are very weak; government has access to all kinds of information. … And they’re going to stay weak. … Consequently, the indirect controls on surveillance need to be very strong, for they are what stands between us and a grim authoritarian future. In particular:

  • Governmental use of private information needs to be carefully circumscribed, including in most aspects of law enforcement.
  • Business discrimination based on private information needs in most cases to be proscribed as well.

The politics of all this is hard to predict. But I’ll note that in the US:

  • There’s an emerging consensus that the criminal justice system is seriously flawed, on the side of harshness. However …
  • … criminal justice reform is typically very slow.
  • The libertarian movement (Ron Paul, Rand Paul, aspects of the Tea Party folks, etc.) seems to have lost steam.
  • The courts cannot be relied upon to be consistent. Questions about Supreme Court appointments even aside, Fourth Amendment jurisprudence in the US has long been confusing and confused.
  • Few legislators understand technology.

Realistically, then, the main plausible path to a good outcome is that the technology industry successfully pushes for one. That’s why I keep writing about this subject in what is otherwise a pretty pure technology blog.

Bottom line: The technology industry needs to drive privacy/surveillance public policy in directions that protect individual liberties. If it doesn’t, we’re all screwed.

Categories: Other

Remove Linux package with apt

Jeff Moss - Wed, 2016-05-18 15:40

Oracle Security Training In York

Pete Finnigan - Wed, 2016-05-18 15:35

We ran a five day Oracle Security training event in York, England from September 21st to September 25th at the Holiday Inn hotel. This proved to be very successful and good fun. The event included back to back teaching by....[Read More]

Posted by Pete On 22/10/15 At 08:49 PM

Categories: Security Blogs

New Presentation - Building Practical Oracle Audit Trails

Pete Finnigan - Wed, 2016-05-18 15:35

I wrote a presentation on designing and building practical audit trails back in 2012 and presented it once and then never again. By chance I did not post the pdf's of these slides at that time. I did though some....[Read More]

Posted by Pete On 01/10/15 At 05:16 PM

Categories: Security Blogs

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2016-05-18 15:35

Oracle Application Express is a great rapid application development tool where you can write your application's functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Vanderbilt University Promotes Oracle HCM, ERP, and EPM Clouds

Linda Fishman Hoyle - Wed, 2016-05-18 15:14

It’s great when we get to showcase our customers. It is amazing when great customers showcase us!

And that’s exactly what happens in this new video from Vanderbilt University.

The video was created by Vanderbilt, not Oracle, to help launch the university’s SkyVU initiative. SkyVU is an Oracle ERP, HCM, and EPM cloud-based solution that replaces approximately 15 current e-business services and provides a modern and synchronized environment “to allow university faculty and staff to spend less time on paperwork and more time on efforts that contribute to the university’s mission.” The video, a launch event, and website (under construction) are part of the rollout to key stakeholders and users.

The SkyVU video is perfectly aligned with Oracle’s message under the tagline “Modern Demands Need Modern Systems.” It presents thoughts from senior executives across Vanderbilt’s HR, Finance, IT, and academic leadership team that will resonate across industries.

You might not be in Higher Ed, but don't let that stop you from watching the video. Vanderbilt is the second largest private employer in the state of Tennessee.

Log Buffer #474: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-05-18 14:46

This Log Buffer Edition covers Oracle, SQL Server and MySQL blogs from across the planet.

Oracle:

You might be thinking “Nothing is more frustrating than encountering a string column that is full of dates”. But there is something worse than that.

Unique constraint WWV_FLOW_WORKSHEET_RPTS_UK violated.

Understanding query slowness after platform change

Database Migration and Integration using AWS DMS

Oracle BPM 12c: Browsing the SOAINFRA

SQL Server:

Adding PK Exceptions to SQLCop Tests

Is a RID Lookup faster than a Key Lookup?

Performance Surprises and Assumptions : DATEADD()

Generate INSERT scripts from SQL Server queries and stored procedure output

PowerShell Desired State Configuration: Pull Mode

Continuous Delivery from the 19th Century to TODAY

MySQL:

ProxySQL versus MaxScale for OLTP RO workloads

Properly removing users in MySQL

MySQL/MariaDB cursors and temp tables

Quick start MySQL testing using Docker (on a Mac!)

Query Rewrite plugin can harm performance

Categories: DBA Blogs

I’m having issues with comment spam

DBMS2 - Wed, 2016-05-18 14:12

My blogs are having a bad time with comment spam. While Akismet and other safeguards are intercepting almost all of the ~5000 attempted spam comments per day, the small fraction that get through are still a large absolute number to deal with.

There’s some danger I’ll need to restrict comments here to combat it. (At the moment they’ve been turned off almost entirely on Text Technologies, which may be awkward if I want to put a post up there rather than here.) If I do, I’ll say so in a separate post. I apologize in advance for any inconvenience.

Categories: Other

Under the Covers of OBIEE 12c Configuration with sysdig

Rittman Mead Consulting - Wed, 2016-05-18 10:57

OBIEE 12c has changed quite a lot in how it manages configuration. In OBIEE 11g configuration was based around system MBeans, with biee-domain.xml as the master copy of settings – and if you directly updated a centrally managed setting in a file, it would get reverted. Now in OBIEE 12c configuration can be managed directly in text files again – but also still through EM (not to mention WLST). Confused? Yep, I was.

In the configuration files such as NQSConfig.INI there are settings still marked with the ominous comment:

# This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control

In 11g this meant – dragons be here; turn back all ye who don’t want to have your configuration settings wiped next time the stack boots.

Now in 12c, I can make a configuration change (such as enabling BI Server caching), restart the affected component, and the change will take effect — and persist through a restart of the whole OBIEE stack. All good.
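
For example, enabling the BI Server cache comes down to one parameter in the [ CACHE ] section of NQSConfig.INI. A sketch, with the parameter names as shipped and illustrative values:

[ CACHE ]
# This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control
ENABLE = YES;
MAX_CACHE_ENTRY_SIZE = 20 MB;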

But … the fly in the ointment: if I restart just the affected component (for example, the BI Server for an NQSConfig.INI change) – since I don’t want to waste time bouncing the whole stack if I don’t need to – then Enterprise Manager will continue to show the old setting:


So even though in fact the cache is enabled (and I can see entries being populated in it), Enterprise Manager suggests that it’s not. Confusing.

So … if we’re going to edit configuration files by hand (and personally I prefer to, since it saves firing up a web browser), we need to know how to make sure Enterprise Manager will reflect the change too. Does EM poll the file whilst running? Does it ask each component directly for its configuration? Or maybe it just reads the file on startup only?

Enter sysdig! What I’m about to use it for is pretty darn trivial (and could probably be done with other standard *nix tools), but is still a useful example. What we want to know is which process reads NQSConfig.INI, and from there isolate the particular component that we need to restart to get it to trigger a re-read of the file and thus correctly show the value in Enterprise Manager.

I ran sysdig with a filter for filename and custom output format to include the process PID:

sudo sysdig -A -p "%evt.num %evt.time %evt.cpu %proc.name (%proc.pid) %evt.dir %evt.info" "fd.filename=NQSConfig.INI and evt.type=open"

Nothing was written (i.e. nothing was polling the file), until I bounced the full OBIEE stack ($DOMAIN_HOME/bitools/bin/stop.sh && $DOMAIN_HOME/bitools/bin/start.sh). During the startup of the AdminServer, sysdig showed:

32222110 12:00:49.912132008 3 java (10409) < fd=874(<f>/app/oracle/biee/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI) name=/app/oracle/biee/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI flags=1(O_RDONLY) mode=0

So – it’s the java process that reads it, PID 10409. Which is that?

$ ps -ef|grep 10409
oracle   10409 10358 99 11:59 ?        00:03:54 /usr/java/jdk1.8.0_51/bin/java -server -Xms512m -Xmx1024m -Dweblogic.Name=AdminServer [...]

It’s AdminServer — which makes sense, because Enterprise Manager is a java deployment hosted in AdminServer.

So, if you want to hack the config files by hand, restart either the whole OBIEE stack, or the affected component plus AdminServer in order for Enterprise Manager to pick up the change.
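
In script form that is something like the following. A sketch: the -i component flag of the bitools scripts and the obis1 instance name assume a default single-node install, so check status.sh for your component names first:

# full stack restart - AdminServer (and so EM) re-reads NQSConfig.INI on startup
$DOMAIN_HOME/bitools/bin/stop.sh && $DOMAIN_HOME/bitools/bin/start.sh
# component-only restart - picks up the change, but EM keeps showing the old value
$DOMAIN_HOME/bitools/bin/stop.sh -i obis1 && $DOMAIN_HOME/bitools/bin/start.sh -i obis1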

The post Under the Covers of OBIEE 12c Configuration with sysdig appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Resize filesystem

Jeff Moss - Wed, 2016-05-18 10:16