Sometimes the most grievous data breaches are caused not by sophisticated cybercriminals using the latest hacking techniques, but by everyday employees who ignore basic protocols.
Last year, Symantec and the Ponemon Institute conducted a study on data breaches that occurred throughout 2012. The two organizations discovered that an astounding two-thirds of these incidents were caused by human errors and system issues. Most of these situations were spawned by workers mishandling confidential information, organizations neglecting industry and government regulations and lackluster system controls.
"While external attackers and their evolving methods pose a great threat to companies, the dangers associated with the insider threat can be equally destructive and insidious," said Ponemon Institute Chairman Larry Ponemon. "Eight years of research on data breach costs has shown employee behavior to be one of the most pressing issues facing organizations today, up 22 percent since the first survey."
ITWire's David Williams noted that Facebook employees accidentally divulged the username and password of the company's MySQL database by using Pastebin.com. For those unfamiliar with the service, Pastebin lets IT specialists post bits of code at a compact URL, which can then be shared through an email, a social media post or a simple Web search.
As URLs are designed so that anyone can view a Web page, it's possible for a random individual to stumble across a URL created by Pastebin and read the content posted there. As it turns out, Sinthetic Labs' Nathan Malcolm learned that Facebook programmers were exchanging error logs and code snippets with one another through Pastebin.
By perusing the Pastebin URLs, Malcolm discovered Facebook shell scripts and PHP code. Williams maintained that none of this data was obtained illegally, nor was it handed over by a Facebook engineer. Instead, the code was "simply lying around the Internet in public view."
It just so happened that one of the URLs contained source code that revealed Facebook's MySQL credentials. The server address and database name, as well as the username and password, were available to the public. Although Facebook has likely changed these access permissions since the accident occurred, it's still an example of how neglect can lead to stolen information.
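Neither Pastebin nor search engines will filter secrets out for you, so a cheap pre-publication scan is worth the few lines it takes. Below is a minimal, hypothetical sketch of the idea (the patterns and the PHP-style snippet are illustrative, not Facebook's actual code):

```python
import re

# Hypothetical patterns for hard-coded secrets; a real pre-commit scanner
# would carry a far larger rule set than these three.
CREDENTIAL_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)\b(api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)mysql://\S+:\S+@"),  # credentials embedded in a URL
]

def find_credentials(text):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

snippet = "$host = 'db.example.com';\n$password = 'hunter2';\nerror_log('failed');"
print(find_credentials(snippet))  # the $password line is flagged
```

Running a check like this over anything bound for a paste site or public repository costs seconds and catches exactly the kind of leak described above.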
Implementing database security monitoring software is one thing, but ensuring workers are following policies that prevent data from accidentally being divulged to the public is another – it's a step that shouldn't be ignored.
The post How an employee mishap can reveal database login credentials appeared first on Remote DBA Experts.
When a professional says he or she specializes in Linux operating systems, some may be cheeky enough to ask "which one?"
The truth is, depending on how knowledgeable a Linux administrator is, he or she could create dozens of unique iterations of the OS. Generally, though, a handful of distributions dominate, most of them developed by companies or communities that redistribute the open-source OS. Iterations vary depending on the functions and settings certain professionals require of the OS. Listed below are five different Linux distributions for servers.
1. Debian
According to Tecmint contributor Avishek Kumar, Debian is an OS that works best in the hands of system administrators or users possessing extensive experience with Linux. He described it as "extremely stable," making it a good option for servers. It has spawned several other iterations, Ubuntu and Kali being two of them.
2. SUSE Linux Enterprise Server
TechTarget's Sander Van Vugt lauded SUSE Linux as one of the most accessible Linux distributions available, also recognizing it for its administrator-friendly build. The latter feature may be due to its integration with YaST (Yet another Setup Tool), a Linux OS configuration program that enables admins to install software, configure hardware, set up networks and servers, and handle several other much-needed tasks.
3. Red Hat Enterprise Linux
Kumar maintained that RHEL was the first Linux distribution designed for the commercial market, and is compatible with x86 and x86_64 server architectures. Due to the support that Red Hat provides for this OS, it is often the server OS of choice for many sysadmins. The only "drawback" of this solution is that it isn't available for free distribution, although a beta release can be downloaded for educational use.
4. Kali Linux
As was mentioned above, this particular iteration is an offshoot of Debian. One of the newest Linux distributions, it isn't necessarily recommended as a general-purpose server OS; it was developed primarily for conducting penetration testing. One of the advantages associated with Kali is that Debian's binary packages can be installed on it. It serves as a fantastic security assessment tool for users concerned with database or WiFi security.
5. Arch Linux
Kumar maintained that one of the advantages associated with Arch is that it is designed as a rolling-release OS, meaning that when a new version is rolled out, those who have already installed Arch receive the update without having to re-install the OS. It is designed for the x86 processor architecture.
Factoring a mobile workforce into a business's enterprise application infrastructure is a consideration many CIOs are making nowadays.
Bring-your-own-device has a number of implications regarding database security, accessibility, operating system compatibility and a wealth of other factors. Constructing and maintaining an ecosystem designed to accommodate personnel using mobile devices to access enterprise software through public networks is more than a best practice – it's a necessity.
Oracle makes enterprise mobility a little easier
Enterprises using Oracle's E-Business Suite applications would do well to consider the developer's Mobile Application Framework, which allows developers to create single-source mobile apps capable of being deployed across multiple OSes. Nation Multimedia reported that MAF provides programmers with a set of tools for building software that satisfies the demands of the mobile workforce.
Oracle Asia Pacific Vice President for Asean Fusion Middleware Sales Chin Ying Loong spoke with the source, asserting that enterprises need platforms that allow them to provide apps through whatever devices their employees choose to use, whether they be Apple tablets or Android phones.
"The trick for organizations today is to implement their own end-to-end mobile platforms, and to keep things simple," said Loong, as quoted by Nation Multimedia. "Simplicity is crucial to the rapid and effective integration of business data with user-friendly mobile applications. The cloud in particular offers businesses an excellent back-end platform to support their mobility solutions in a simple and cost-effective manner."
Has the mobile workforce really arrived?
BYOD isn't a trend of the future, but an occurrence of the present. MarketsandMarkets found that the enterprise mobility market will increase to $266.17 billion in 2019 at a compound annual growth rate of 25.5 percent from 2014 to 2019. IDC predicted that by next year, the number of mobile employees will reach 1.3 billion – approximately 37 percent of the global workforce.
Smart Dog Services' Alison Weiss commented on these statistics, acknowledging that the average IT department has a budget of $157 per device per worker, an expenditure that is anticipated to reach $242 per device per employee by 2016.
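For a sense of how projections like these compound, the $266.17 billion figure at a 25.5 percent CAGR implies a 2014 base of roughly $85.5 billion (our back-of-envelope arithmetic, not a number quoted by MarketsandMarkets):

```python
# Reverse a CAGR projection: value_end = value_start * (1 + rate) ** years,
# so value_start = value_end / (1 + rate) ** years.
value_2019 = 266.17  # billions USD, per MarketsandMarkets
rate = 0.255
years = 2019 - 2014

value_2014 = value_2019 / (1 + rate) ** years
print(f"Implied 2014 market size: ${value_2014:.1f}B")  # roughly $85.5B
```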
Given these developments, it's important for enterprises to consider which kinds of applications personnel will attempt to access via mobile devices. For instance, cloud storage services for saving documents, enterprise resource planning software and customer relationship management solutions are all technologies mobile workers are likely to use while on the go.
Hi, welcome to RDX. You may think your disaster recovery strategy is rock solid, but is it as comprehensive as you would like it to be? Are you leaving any factors out of the equation?
Dimension Research recently conducted a survey of 453 IT and security pros based in the U.S. and Canada. The group discovered 79 percent of respondents experienced a major IT blackout within the past two years. Of those participants, only 7 percent felt confident in their ability to deploy recovery strategies within two hours of an incident.
To ensure information is transferred to functional facilities in the event of a disaster, enterprises would benefit from collaborating with remote DBAs. These professionals can help detail every aspect of the DR initiative and outline how continuity can be maintained.
Thanks for watching!
Hi, welcome to RDX! Firewalls, intrusion detection systems and database access security are all necessary for protecting information. However, some professionals are saying businesses could be doing more to deter hackers.
For example, why not make it difficult for them to infiltrate systems? Amit Yoran, a former incident response expert at the U.S. Department of Defense, believes data analysis programs must be leveraged to not only identify threats, but also map out sequences of events.
Once complex infiltration strategies are understood, embedded database engines can deploy counter-attacks that exploit hackers' vulnerabilities. This allows organizations to effectively dismantle complex infiltration endeavors while enabling them to reinforce existing defenses.
Thanks for watching! For more advice on database security, be sure to check in!
Hi, welcome to RDX! The holidays are underway, meaning shopping mall and e-commerce traffic is booming. It also means that hackers are redirecting their attention to retail point-of-sale systems.
Last year, cybercriminals were attacking databases holding credit and debit card information. Now, however, their attention is being directed elsewhere. NuData Security's Ryan Wilk maintained that hackers are focusing on servers that host user accounts. For instance, if a thief were to target a person's Amazon account, he or she would gain access not only to that person's payment card info, but to his or her home address and phone number as well.
There are two ways in which companies can prevent hackers from taking over accounts. First, installing a threat detection surveillance system is necessary. From there, businesses should send emails to account holders advising them to use stronger passwords.
Thanks for watching!
Hi, welcome to RDX! In the past, cybercriminals typically focused on operating systems and software written in C or C++. Now, they’re redirecting their attention to Web applications and services that were coded in languages such as Java and .NET.
One such attack, dubbed “Operation Aurora,” occurred in 2009. Allegedly, the initiative was conducted by hackers connected to the Chinese military. The perpetrators directed their attention toward Adobe, Rackspace and others to manipulate application source code.
How can enterprises prepare for these kinds of attacks? Backing up their applications and the data within those programs is the best course of action. In addition, companies should install malware detection programs to prevent software from being corrupted.
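One inexpensive control that complements backups and malware scanning is integrity checking: record a known-good digest of each application artifact when it is deployed, then re-verify on a schedule. A sketch of the idea (the paths and workflow are hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large artifacts never sit fully in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest):
    """manifest maps file path -> known-good hex digest; return paths that changed."""
    return [path for path, expected in manifest.items()
            if sha256_of(path) != expected]
```

Store the manifest somewhere the application servers cannot write to; any path verify() reports has been corrupted or tampered with and should be restored from backup.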
Thanks for watching! Be sure to check in again for more security news and tips.
The post Hackers targeting simple Web application vulnerabilities [VIDEO] appeared first on Remote DBA Experts.
Hi, welcome to RDX! Just before Sony Pictures was set to release “The Interview,” a previously unidentified group of hackers released confidential files stored in Sony’s databases.
“The Interview” is a comedy about a TV host ordered to assassinate North Korean dictator Kim Jong-un. After a two-week investigation, the Federal Bureau of Investigation confirmed that the North Korean government was responsible for the data breach. As the film is a satire of Kim Jong-un’s regime, it makes sense that such a damaging attack would originate from North Korea.
From RDX’s perspective, deterring these kinds of attacks requires businesses to install database security monitoring software. Any time an unauthorized user begins copying information, alerting database administrators is essential.
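The alerting idea reduces to a baseline-and-threshold check: learn what a normal read rate looks like for an account, then flag readings far outside it. A toy sketch (the numbers and the three-sigma threshold are illustrative, not any product's defaults):

```python
from statistics import mean, stdev

def is_anomalous(history, current, sigmas=3.0):
    """Flag a reading that exceeds the historical mean by `sigmas` standard deviations."""
    mu, sd = mean(history), stdev(history)
    return current > mu + sigmas * sd

# Rows read per minute by one account over a normal period (made-up numbers).
baseline = [120, 95, 110, 130, 105, 98, 115, 125, 102, 118]
print(is_anomalous(baseline, 140))     # an ordinary spike: no alert
print(is_anomalous(baseline, 50_000))  # a bulk copy in progress: page the DBA
```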
Thanks for watching!
The post FBI concludes North Korean hackers responsible for Sony breach [VIDEO] appeared first on Remote DBA Experts.
Hi, welcome to RDX! Many of you have probably heard of a Windows Server vulnerability that allows hackers to assign domain user accounts the same access privileges as administrator accounts.
As many Windows Server experts know, this enables attackers to easily infiltrate computers and other machines within a Windows Server domain. However, a hacker would have to possess accepted domain credentials to take advantage of the bug.
Thankfully, Microsoft released an update to Windows Server 2012 R2 and its predecessors to resolve the issue. This fix ensures that a Kerberos service ticket cannot be forged. Companies looking for Windows Server gurus with extensive experience in security should check out RDX’s Windows service package.
Thanks for watching!
Hi, welcome to RDX! At times, cybercriminals may be acting for political or nationalistic reasons. One hacker cell has been suspected of harboring such motivations.
Cylance, a cybersecurity research firm based out of California, reported the group has successfully infiltrated notable energy, defense and airline companies. The study’s authors warned that if attacks from the Iranian cell continue, it could impact the physical safety of world citizens. An Iranian diplomat informed news sources that Cylance’s assertion was unsubstantiated.
To help prevent cyber-attacks, it’s imperative that defense contractors, energy firms and other such businesses reevaluate their database security protocols. Applying monitoring tools capable of identifying anomalies is the first step, but proactively searching for bugs and applying patches is an absolute must.
Thanks for watching!
System administrators have favored Unix for its simplicity and versatility – two traits most open source programs possess. Not to mention, the operating system is free.
However, like any other piece of software, it's not without vulnerabilities. Welivesecurity noted researchers at ESET, CERT-Bund and other organizations discovered a comprehensive cybercriminal endeavor that compromised tens of thousands of Unix servers. The source advised sysadmins to make a thorough assessment of their Unix machines and re-install the OS if they believe them to be infected.
This incident is an example of the dangers Unix users face. That doesn't mean such professionals should panic. Generally, there are four habits Unix specialists should cultivate in order to ensure the OS is performing optimally and is devoid of security vulnerabilities:
1. You're proactive
Implementing security monitoring tools that peruse Unix servers for bugs, malware and other negative discrepancies is a good practice to employ. In addition, regularly scrutinizing the performance of these machines helps experts glean insights as to which factors may be hindering efficiency. It's acceptable to put minor vulnerabilities on the back burner while higher priorities are addressed, but never let these flaws go unnoticed.
2. You know the technology
This seems like a fairly obvious point that isn't worth mentioning, but it shouldn't be written off. ITWorld's Sandra Henry-Stocker maintained that knowing how the servers perform under normal conditions will provide professionals with some insights as to why Unix servers are using more memory during certain time periods, for example.
3. You take just enough time to know what went wrong
The big system crashes you've heard your colleagues discuss are inevitable. Sure, maintenance and attention could reduce the chances of such disasters occurring, but that doesn't mean they're not going to happen. So, after the problem has been resolved, take some time to assess what went wrong and how it could have been prevented. That being said, don't spend half the day trying to figure out the issue.
4. You document your work
Henry-Stocker advised Unix admins to create a brief outline of every tool they build with the operating system. Although it's not necessarily the most glamorous part of the job, doing so will help you:
- a) remember every step you took when constructing an app, and
- b) provide your colleagues with a reliable point of reference.
As you're most likely working in a team, making your co-workers' jobs that much easier can't hurt.
Hi, welcome to RDX! Banks, retailers and other organizations that use point-of-sale and payment software developed by Charge Anywhere should take extra precautions to ensure their databases are protected.
The software developer recently announced that it sustained a breach that may have compromised data that was produced as far back as 2009. This event reaffirms cybersecurity experts’ assertions that cybercriminals are targeting companies that provide payment software, as opposed to simply attacking merchants.
While it’s up to Charge Anywhere and other such enterprises to patch any bugs in their software, those using these programs should ensure their point-of-sale databases containing payment card info are strictly monitored. Informing DBAs on how to better manage their access credentials is another necessary step.
Thanks for watching!
The post Point-of-sale developers find themselves targets for cyberattacks [VIDEO] appeared first on Remote DBA Experts.
The Database Protection Series Continues – Evaluating the Most Common Threats and Vulnerabilities – Part 1
This is the second article of a series that focuses on securing your database data stores. In the introduction, I provided an overview of the database protection process and what will be discussed in future installments. Before we begin the activities required to secure our databases, we need to have a firm understanding of the most common database vulnerabilities and threats.
This two-part article is not intended to be an all-inclusive listing of database vulnerabilities and threat vectors. We’ll take a look at some of the more popular tactics used by hackers as well as some common database vulnerabilities. As we cover the topics, I’ll provide you with some helpful hints along the way to decrease your exposure. The list is not in any particular order. In future articles, I’ll refer back to this original listing from time to time to ensure that we continue to address them.

Separation of Duties (Or Lack Thereof)
Every major industry regulation is going to have separation of duties as a compliance objective. If your organization complies with SOX, you should be well aware of the separation of duties requirements. In order for my organization to satisfy our PCI compliance objectives, we need to constantly ensure that no single person, or group, is totally in control of a security function. Since we don’t store or process PCI data, we focus mainly on securing the architecture. So, I’ll use my organization’s compliance activities to provide two quick examples:
- We assign the responsibility of security control design to an internal RDX team and security control review to a third-party auditing firm.
- Personnel assigned the responsibility of administering our internal systems, which include customer access auditing components, do not have privileges to access our customer’s systems.
The intent is to prevent conflicts of interest, intentional fraud, collusion or unintentional errors from increasing the vulnerability of our systems. For smaller organizations, this can be a challenge, as it introduces additional complexity into the administration processes and can lead to an increase in staff requirements. The key is to review all support functions related to the security and access of your systems and prioritize them according to the vulnerability created by misuse. Once the list is complete, decompose the support activities into more granular tasks and divide the responsibilities accordingly.

Unidentified Data Stores Containing Sensitive Data
It’s a pretty simple premise – you can’t protect a database that you don’t know about. The larger the organization, the greater the chance that sensitive data is being stored and not protected. Most major database manufacturers provide scanners that allow you to identify all of their product’s installations. These are most often used during the dreaded licensing audits. As part of our database security service offering, RDX uses McAfee’s Vulnerability Scanner to identify all databases installed on the client’s network.
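Commercial scanners do far more, but the core discovery step, probing hosts for listeners on well-known database ports, can be sketched in a few lines (the port list is an illustrative subset; only sweep networks you are authorized to scan):

```python
import socket

# Default listener ports for common database products (illustrative subset).
DB_PORTS = {
    1521: "Oracle",
    1433: "SQL Server",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "DB2",
}

def probe(host, ports=DB_PORTS, timeout=0.5):
    """Return (port, product) pairs on `host` that accept a TCP connection."""
    found = []
    for port, product in sorted(ports.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append((port, product))
    return found
```

Run a sweep like this across your subnets, then reconcile the hits against your database inventory; anything not on the list is a candidate rogue data store that needs an owner, a purpose and a protection plan.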
Once you identify these “rogue” data stores, your next goal is to find out what’s in them. This can be accomplished by asking the data owner to provide you with that information. A better strategy is to purchase one of the numerous data analyzers available on the market. The data analyzer executes sophisticated pattern matching algorithms to identify potentially sensitive data elements. Because of the complex matching process that has to occur, there aren’t a lot of free offerings on the web you can take advantage of. In our case, McAfee’s database scanner also includes the data identification feature. It helps us to uncover the sensitive data elements that are hidden in our customers’ database data stores.

Clones from Production
Application developers have a particular affinity for wanting real world data to test with. Can you blame them? There’s a myriad of data variations they need to contend with. Cloning live environments allows them to focus on writing and testing their code and less on the mindless generation of information that attempts to mimic production data stores.
Cloning creates a whole host of vulnerabilities, however. The process produces a duplicate of the production environment, and that duplicate needs to be secured accordingly. In addition, application developers shouldn’t have access to sensitive production data, yet they’ll need access to the cloned systems to perform their work. The first step is to identify the sensitive elements and then create a strategy to secure them.
Data masking, also known as data scrambling, allows administrators to effectively secure cloned data stores. After the cloning process is performed, the administrator restricts access to the system until the data scrambling process is complete. The key to a successful masking process is to replace the original data with as realistic a replacement as possible. Masking is not intended to be an alternative to encryption; its intent is to obscure the original values stored in the system.
There are several types of data masking offerings available. Your first step should be to check your database product’s feature list. Oracle, for example, provides a data masking feature. There’s also a wealth of third-party products available to you. If you have limited funding, search the internet for database masking and you’ll find lots of free alternatives. Use a strong level of due diligence if you have to use a free alternative and check the data before and after scrambling – no matter which option you choose.
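Whichever product you pick, the underlying idea is simple to illustrate: replace each letter and digit with a random one while preserving the value's shape, so test data stays realistic but worthless to a thief. A toy sketch, not a substitute for a vetted masking tool:

```python
import random
import string

def mask(value, seed=None):
    """Scramble letters and digits while preserving length, case and punctuation."""
    rng = random.Random(seed)
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isupper():
            out.append(rng.choice(string.ascii_uppercase))
        elif ch.islower():
            out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(ch)  # keep separators so the format stays valid
    return "".join(out)

print(mask("4111-1111-1111-1111"))   # still shaped like a card number
print(mask("jane.doe@example.com"))  # still shaped like an email address
```

Real masking products go further, for example keeping masked values consistent across tables so joins still work, which is one reason to favor them over homegrown scripts.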
If your cloning process creates any files that contain data, most often used to transfer the database from source to target, wipe them out after the cloning process is complete. Lastly, perform an in-depth account security review. Remove all accounts that aren’t needed for testing and create only the new ones needed to provide the required functionality to your application developers. In part 2 of this article, we’ll discuss backup, output and load files; you will also need to secure your cloned systems’ files accordingly.

Default, Blank and Weak Passwords
Years ago, after my career as an Oracle instructor, I became an industry consultant. One of my company’s offerings was the database assessment. I reviewed customers’ environments to ensure they were optimized for performance, availability and security. Those were the days when breaches, and the resulting attention paid to security, were far less prominent than they are today. I’d bring up a logon screen to the database and attempt to log in using the default passwords that were available in Oracle. At that time there were about a dozen or so accounts automatically available after database creation. I always loved the reaction of the client as they watched me successfully access their environment. At each login, I’d say “this one gives me DBA”, “this one gives me access to all of your data tables”…. You can’t believe how many times I successfully logged in using sys/change_on_install.
Although Oracle, like most major database vendors, has ratcheted down on default accounts, default and weak passwords are still a problem. The database documentation will contain a listing of the accounts that are automatically created during installation. Some advanced features will also require accounts to be activated. After database creation and every time you install a new feature, do a quick scan of the documentation and then select from the users table to see if you have additional accounts to secure.
All major database vendors including Oracle, SQL Server, MySQL and DB2 provide password complexity mechanisms. Some of them are automatically active when the database is created while others must be manually implemented. In addition, most allow you to increase the complexity by altering the code.
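Those complexity rules are straightforward to express in code, and pairing them with a generator is exactly what the better password vaults do. A sketch (the specific policy below is illustrative, not any vendor's default):

```python
import secrets
import string

def is_complex(password, min_len=12):
    """Illustrative policy: minimum length plus upper, lower, digit and symbol classes."""
    return (len(password) >= min_len
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

def generate(length=16):
    """Produce a password that satisfies is_complex(), using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if is_complex(candidate):
            return candidate

print(is_complex("change_on_install"))  # the old Oracle default fails the policy
print(is_complex(generate()))           # a generated credential passes it
```

Note the use of the `secrets` module rather than `random`: password generation should always come from a cryptographically secure source.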
Once your complexity strategy is underway, you’ll need to use a password vault to store your credentials. I’m particularly fond of vaults that also provide a password generator. The password vault’s feature list will be important. When you perform your analysis of password vaults, some of the more important features to focus on are:

- How the password vault enforces its own security
- The logging and auditing features available
- Encryption at rest and during transfer, plus backup encryption
- Early-warning systems
- Dual launch key capabilities (it takes two personnel to check out a password)
- Automatic notification when a password is checked out
- How it handles separation of duties
- Whether it can record the actions taken on the targets after the credentials are accessed

Unencrypted Data At Rest
Database encryption, if administered correctly, provides a strong defense against data theft. Most major database vendors provide data encryption as part of the product’s feature set. Microsoft SQL Server and Oracle both call theirs Transparent Data Encryption (TDE); IBM provides a few alternatives, including InfoSphere Guardium; and MySQL provides a set of functions that perform data encryption.
You’ll also need to determine what data you want to encrypt as well as how you want to do it. Most of the vendor offerings allow you to encrypt data at different levels including column, table, file and database. Like most database features, you will need to balance security with processing overhead. Most encryption alternatives will add enough overhead to impact database performance. Identify the sensitive data elements and encrypt them.
Key management is crucial. You can use all the encryption you want, but your environment will still be vulnerable if you don’t perform effective key management. Here are a couple of RDX best practices for encryption key management:
- Keep your encryption algorithms up to date. Vendors release updates for their encryption features on a fairly regular basis. New database versions often contain significant security enhancements including new and improved encryption functionality. There should be no debate with anyone in your organization. If you are storing sensitive data, keep encryption functionality current.
- DB2, Oracle, SQL Server and MySQL all have strong encryption features. If your database product doesn’t, you’ll need to rely upon a robust, third-party product. Encryption isn’t a feature you want to skimp on or cobble together with a homegrown solution.
- Store the keys securely in a safe, centralized location. During our security analysis, we have seen a few shops store their keys in the same areas as the data they are encrypting. Storing them in a centralized location allows you to lock that storage area down, provide separation of duties and activate access alerts.
- Key rotation is the tech term for changing your encryption key values on a regular basis. Depending on the database, this can be a fairly complex process. For other implementations, it’s fairly simple. Complex or simple, come up with a plan to rotate your keys at least yearly.
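The rotation mechanics themselves reduce to: decrypt with the retiring key, re-encrypt with the new one, then destroy the old key. Below is a toy sketch using an HMAC-SHA-256 keystream purely to show that flow; it is not a production cipher, and in practice you would use your database's or vendor's encryption feature:

```python
import hashlib
import hmac

def _keystream(key, n):
    """Derive n pseudorandom bytes from key (toy construction, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt(plaintext, key):
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR against the keystream is its own inverse

def rotate(ciphertext, old_key, new_key):
    """Re-encrypt data under a new key; afterwards the old key can be retired."""
    return encrypt(decrypt(ciphertext, old_key), new_key)
```

Run the rotation across the protected columns inside a maintenance window, confirm every row decrypts under the new key, then destroy the old key per your retention policy.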
In part 2 of this article, we’ll cover unsecured data transmissions, securing input, output, report and backup files, SQL Injection and buffer overflow protection and a few other topics. We’ll then continue our discussion on the process of securing our sensitive database data stores by outlining the key elements of a database security strategy.
For retailers, a vulnerability in their servers' operating systems could mean millions of dollars in losses, depending on how quick hackers are to react to newly discovered bugs.
As Linux is affordable, efficient and versatile, many e-commerce and brick-and-mortar merchants use the OS as their go-to system, according to Alert Logic's Tyler Borland and Stephen Coty. The duo noted Linux also provides a solid platform on which e-commerce and point-of-sale software can run smoothly.
The Grinch targeting Linux?
Due to Linux's popularity among retailers, it's imperative they assess a vulnerability that was recently discovered – a bug that has been given the nickname "Grinch" by researchers. Dark Reading's Kelly Jackson Higgins noted the fault hasn't been labeled as an "imminent threat," but it's possible that some malicious actors would be able to leverage Grinch to escalate permissions on Linux machines and then install malware.
Coty and Borland noted that Alert Logic's personnel discovered the bug, which exploits the "su" command, enabling one user to masquerade as another. The su command is tied to the wheel user group, and when a Linux system is built, the default user is made a member of the wheel group, providing him or her with administrative rights.
"Anyone who goes with a default configuration of Linux is susceptible to this bug," Coty told Jackson Higgins. "We haven't seen any active attacks on it as of yet, and that is why we wanted to get it patched before people started exploiting it."
Where the flaw lies
Jackson Higgins maintained the Grinch is "living" in the Polkit, a.k.a. PolicyKit for Linux. Polkit is a privilege management system that allows administrators to assign authorizations to general users. Coty and Borland outlined the two main concepts experts should glean from Polkit:
- One of Polkit's uses lies in the ability to determine whether the program should initiate privileged operations for a user who requested the action to take place.
- Polkit access and task permission tools can identify multiple active sessions and seats, the latter of which is described as an "untrusted user's reboot request."
"Each piece of this ecosystem exposes possible vulnerabilities through backend D-Bus implementation, the front end Polkit daemon, or even userland tools that use Polkit for privilege authorization," wrote Coty and Borland.
Despite these concerns, Coty informed Jackson Higgins that this vulnerability won't have to be patched until after the holiday season, and only inexperienced Linux users are likely to encounter serious problems.
Hi, welcome to RDX! The Federal Bureau of Investigation recently sent a five-page document to businesses, warning them of a particularly destructive type of malware. It is believed the program was the same one used to infiltrate Sony's databases.
The FBI report detailed the malware's capabilities. Apparently, the software overwrites all information on computer hard drives, including the master boot record. This could prevent servers from accessing critical software, such as operating systems or enterprise applications.
Database data can be lost or corrupted for many reasons. Regardless of whether the data loss was due to a hardware failure, human error or the deliberate act of a cybercriminal, database backups ensure that critical data can be quickly restored. RDX's backup and recovery experts can design well-thought-out strategies that help organizations protect their databases from any type of unfortunate event.
Thanks for watching!
I’ve spent a few days playing with patching 12.1.0.2 with the so-called “Database Patch for Engineered Systems and Database In-Memory”. Let’s skip over why these not necessarily related feature sets should be bundled together into what is effectively a Bundle Patch.
First I was testing going from 12.1.0.2.1 to BP2, i.e. 12.1.0.2.2. Then, as soon as I’d done that, of course BP3 was released.
So this is our starting position with BP1:
[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)
[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)
Simple enough, right? BP1 and the individual patch components within BP1 give you 12.1.0.2.1. Even I can follow this.
Let’s try to apply BP2 to the above. We will use opatchauto for this and, to begin with, we will run an analyze:
[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-35-17_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home: /u01/app/12.1.0/grid_1
RAC home(s): /u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591
RAC Patch(es): 19392604 19649591

Patch Validation: Successful

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.

SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:
opatchauto ran into some warnings during analyze (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.
Well, that does not look promising. I have no “one-off” patches in this home to cause a conflict, so this should be a simple BP1-to-BP2 patch without any issues.
Digging into the logs we find the following:
. . .
[18-Dec-2014 13:37:08] Verifying environment and performing prerequisite checks...
[18-Dec-2014 13:37:09] Patches to apply -> [ 19392590 19392604 19649591 ]
[18-Dec-2014 13:37:09] Identical patches to filter -> [ 19392590 19392604 ]
[18-Dec-2014 13:37:09] The following patches are identical and are skipped:
[18-Dec-2014 13:37:09] [ 19392590 19392604 ]
. . .
Essentially, of the three patches in the home at BP1, only the Database Bundle Patch (19189240) is superseded by BP2. Maybe this annoys me more than it should: I like my patches applied by BP2 to end in .2. I also don’t like the fact that the analyze throws a warning about this.
[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-54-03_deploy.log

Parameter Validation: Successful

Grid Infrastructure home: /u01/app/12.1.0/grid_1
RAC home(s): /u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591
RAC Patch(es): 19392604 19649591

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful
Following database(s) and/or service(s) were stopped and will be restarted later during the session: testrac

Applying patch(es) to "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.

Stopping CRS ... Successful

Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/12.1.0/grid_1" with warning.

Starting CRS ... Successful

Starting RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful

SQL changes, if any, are applied successfully on the following database(s): TESTRAC

Apply Summary:
opatchauto ran into some warnings during patch installation (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.
I do not like to see warnings when I’m patching. The log file for the apply is similar to the analyze: identical patches are skipped.
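If you just want the skip decision without wading through the whole log, a quick grep does it. A small sketch of my own, using sample lines copied from the log excerpt earlier in this post:

```shell
# Recreate two lines from the opatchauto log, then extract the filter decision.
cat > opatch_analyze.log <<'EOF'
[18-Dec-2014 13:37:09] Patches to apply -> [ 19392590 19392604 19649591 ]
[18-Dec-2014 13:37:09] Identical patches to filter -> [ 19392590 19392604 ]
EOF

grep -o 'Identical patches to filter -> \[[0-9 ]*\]' opatch_analyze.log
# prints: Identical patches to filter -> [ 19392590 19392604 ]
```

Pointing the same grep at the real log under cfgtoollogs/opatchauto tells you immediately which patches opatchauto considered identical and skipped.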
Checking where we are with GI and DB patches now:
[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
The only one changed is the DATABASE BUNDLE PATCH.
The one MOS document I effectively have on “speed dial” is 888828.1, and it showed BP3 as being available from 17th December. It also had the following warning:
Before install on top of 12.1.0.2.1DBBP or 12.1.0.2.2DBBP, first rollback patch 19392604 OCW PATCH SET UPDATE : 12.1.0.2.1
[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP3/20026159 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/20026159/opatch_gi_2014-12-18_14-13-58_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home: /u01/app/12.1.0/grid_1
RAC home(s): /u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP3/20026159
Grid Infrastructure Patch(es): 19392590 19878106 20157569
RAC Patch(es): 19878106 20157569

Patch Validation: Successful

Command "/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1" execution failed
Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-14-50PM_1.log

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP3/20026159/19878106" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP3/20026159/20157569" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Command "/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp" execution failed:
UtilSession failed: After skipping conflicting patches, there is no patch to apply.
Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-15-30PM_1.log

Following step(s) failed during analysis:
/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1
/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp

SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:
opatchauto ran into some warnings during analyze (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19878106, 20157569

Following patch(es) failed to be analyzed:
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19878106, 20157569

opatchauto analysis reports error(s).
Looking at the log file, we see that patch 19392604, already in the home, conflicts with patch 19878106 from BP3. 19392604 is the OCW PATCH SET UPDATE in BP1 (and BP2), while 19878106 is the Database Bundle Patch in BP3. The log shows:
Patch 19878106 has Generic Conflict with 19392604. Conflicting files are : /u01/app/12.1.0/grid_1/bin/diskmon
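A small sketch of my own for pulling the offending patch ID out of a conflict message like the one above, so it can be fed straight into an opatch rollback:

```shell
# Recreate the conflict line, then extract the ID of the installed patch
# that the new bundle collides with.
cat > conflict.log <<'EOF'
Patch 19878106 has Generic Conflict with 19392604. Conflicting files are : /u01/app/12.1.0/grid_1/bin/diskmon
EOF

sed -n 's/.*Generic Conflict with \([0-9]*\).*/\1/p' conflict.log   # prints 19392604
```

That 19392604 is exactly the patch MOS note 888828.1 says must be rolled back before applying BP3.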
That seems messy. It definitely annoys me that to apply BP3 I have to take the additional step of rolling back a previous BP. I don’t recall having to do this with previous Bundle Patches, and I’ve applied a fair few of them.
I rolled the lot back with opatchauto rollback, then applied BP3 on top of the unpatched homes I was left with. But let’s look at what BP3 on top of 12.1.0.2 gives you:
[oracle@rac1 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)
[oracle@rac1 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)
So for BP2 we had patch 19392604, OCW PATCH SET UPDATE : 12.1.0.2.1. With BP3 we still have an OCW Patch Set Update at 12.1.0.2.1, but it has a new patch number!
That really irritates.
Enterprises looking for Oracle experts knowledgeable of the software giant's latest database solutions may discover some DBAs' certifications are no longer valid.
In addition, companies using SAP applications would do well to hire DBAs who know how to optimize Oracle's server solutions. Many SAP programs leverage Oracle 12c databases as the backbone of their functionality, so ensuring that SAP's software can use and secure data within these environments efficiently is a must.
Releasing new accreditation standards
TechTarget's Jessica Sirkin commented on Oracle's certification requirements, which now state that DBAs must take tests within one year in order to revamp their accreditations. The exams vet a person's familiarity with more recent versions of Oracle Database. Those with certifications in 7.3, 8, 8i and 9i must undergo tests to obtain accreditations in 10g, 11g or 12c to retain their active status within Oracle's CertView portal system.
These rules apply to any professional holding a Certified Associate, Professional, Expert or Master credential in the aforementioned solutions. Those already possessing accreditation in 12c or 11g will be able to retain their active statuses for the foreseeable future, but DBAs with 10g certifications will be removed from the CertView list on March 1 of next year.
As for the company's reasons, Oracle publicly stated that the measures are intended to "have qualified people implementing, maintaining and troubleshooting our software."
SAP gets closer to Oracle
Those who use Oracle's databases to power their SAP software are in luck. Earlier this year, SAP certified its solutions to coincide with the rollout of an updated version of Oracle 12c, specifically build 12.1.0.2. One of the reasons why SAP is supporting Oracle's flagship database is that the company wants to provide its customers with more flexible upgrade paths from 11g Release 2 to Oracle's latest release. SAP's supporting features will include the following:
- A multitenancy option for 12c, which allows multiple pluggable databases to be consolidated and managed within a single container database.
- Hybrid columnar compression technology, which will certainly help those who are trying to engineer back-end databases to store more information.
Most importantly, the news source acknowledged that many businesses use Oracle's database products in conjunction with SAP's enterprise software. Incompatibility between the two has been a persistent headache for IT departments working with these solutions, but SAP's move will improve matters.
Overall, hiring a team of database experts experienced in working with different software running on top of Oracle is a safe bet for organizations wary of this change.
The post Oracle revises its database administrator accreditation portfolio appeared first on Remote DBA Experts.
While Oracle's database engine and Microsoft's SQL Server rank among the top server solutions for enterprises, PostgreSQL is by no means being left in the dust.
This year, The PostgreSQL Global Development Group released PostgreSQL 9.4, which was equipped with several bug fixes as well as a few new capabilities, such as:
- An ALTER SYSTEM command can be used to change configuration file entries
- Materialized views can now be refreshed without blocking concurrent reads
- Logical decoding for WAL data was added
- Background worker processes can now be initiated, logged and terminated dynamically
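Two of the features above can be sketched as SQL for a 9.4+ server. The database and view names here are hypothetical; since no server is assumed, the script only writes and inspects the file, with the real invocation shown as a comment.

```shell
# Write the demo SQL for two PostgreSQL 9.4 features to a file.
cat > pg94_demo.sql <<'SQL'
ALTER SYSTEM SET work_mem = '64MB';                   -- rewrites postgresql.auto.conf
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_summary; -- readers stay unblocked
SQL

grep -c ';' pg94_demo.sql   # prints 2
# To execute for real: psql -d appdb -f pg94_demo.sql
```

ALTER SYSTEM removes the need to hand-edit postgresql.conf on the server's filesystem, and the CONCURRENTLY variant of the refresh requires a unique index on the materialized view.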
PostgreSQL 9.4 is currently available for download on the developer's website. Users can access versions that are compatible with specific operating systems, including SUSE, Ubuntu, Solaris, Windows and Mac OS X.
Rising to fame?
ZDNet contributor Toby Wolpe spoke with EnterpriseDB chief architect Dave Page, who is also a member of PostgreSQL's core team, on the open source solution's popularity. He maintained that PostgreSQL's capabilities are catching up to Oracle's, an assertion that may not be shared by everyone but one worth acknowledging nonetheless.
Page referenced PostgreSQL as "one of those well kept secrets that people just haven't cottoned on to." One of the reasons why PostgreSQL has gained so much traction lately is MySQL's purchase by Sun, which was subsequently acquired by Oracle. According to Page, people are apprehensive about Oracle's control of MySQL, another relational database engine.
Throughout his interview with Wolpe, Page noted a general sentiment among DBAs switching from MySQL to PostgreSQL: the latter solution is more feature-rich. It stands to reason that PostgreSQL would have plenty of functions to offer DBAs, given its open source development model. When users build customizations that complement not only their own workflows but those of other DBAs as well, such capabilities tend to be integrated permanently into the next release.
One particular feature that has stuck in PostgreSQL is foreign data wrappers. Wolpe noted that these wrappers enable remote information to be presented as a table within PostgreSQL, meaning queries can be run across both PostgreSQL tables and foreign data as if it were all native.
Another feature provides support for JSONB data, allowing information to be stored within PostgreSQL in a binary format. The advantage of this format is that it enables new, fast index operators.
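Both features can be sketched as SQL (server, host and table names like remote_db and events are hypothetical). As above, the script only writes and inspects the file, since no running server is assumed.

```shell
# Write demo SQL for foreign data wrappers and an indexed JSONB column.
cat > fdw_jsonb.sql <<'SQL'
-- Foreign data wrapper: expose a remote database's tables as if local.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER remote_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db2.example.com', dbname 'sales');
-- JSONB column with a GIN index for fast containment (@>) lookups.
CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);
CREATE INDEX ON events USING gin (payload);
SQL

grep -c '^CREATE' fdw_jsonb.sql   # prints 4
```

The GIN index is what makes JSONB's "quick-acting" operators possible: containment queries such as payload @> '{"status": "error"}' can use the index instead of scanning the table.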
While PostgreSQL may not be the engine of choice for some DBAs, it is a solution worth acknowledging.
Effective disaster recovery plans admittedly take a lot of time, resources and attention to develop, which may cause some small and mid-sized businesses to shy away from the practice. While it's easy to think "it could never happen to me," that's certainly not a good mindset to possess.
While the sole proprietor of a small graphic design operation may want to set up a disaster recovery plan, he or she may not know where to start. It's possible that the application used to create designs resides in-house, but the system used to deliver content to customers may be hosted through the cloud. It's a confusing situation, especially if one doesn't have experience in IT.
SMEs need DR, but don't have robust plans
To understand how strong small and mid-sized enterprises' DR strategies are, Dimensional Research and Axcient conducted a survey of 453 IT professionals working at companies possessing between 50 and 1000 workers. The study found that 71 percent of respondents back up both information and software, but only 24 percent back up all their data and applications. Other notable discoveries are listed below:
- A mere 7 percent of survey participants felt "very confident" that they could reboot operations within two hours of an incident occurring.
- More than half (53 percent) of respondents asserted company revenues would be lost until critical systems could be rebooted.
- Exactly 61 percent of SMBs use multiple backup and recovery tools that perform the same functions.
- Almost three-fourths maintain that using multiple DR assets can increase the risk of endeavors failing.
- Eighty-nine percent of those surveyed view cloud-based DR strategies as highly desirable. The same percentage acknowledged that business workers are much less productive during outages.
What measures can SMBs take?
For many IT departments at SMEs, adopting cloud-based DR plans can be incredibly advantageous. IT Business Edge's Kim Mays noted that decision-makers should pay close attention to the information and applications employees access most often to perform day-to-day tasks. Allowing these IT assets to transition to cloud infrastructures in the event of a disaster will allow workers to continue with their responsibilities.
Of course, using a cloud-based strategy isn't the be-all, end-all to a solid DR blueprint. For instance, it's possible that personnel residing in affected areas may not have Internet access. This is where a business' senior management comes into play: Set guidelines that will allow staff to make decisions that will benefit the company during an outage.
The post Yes, SMBs should pay attention to disaster recovery appeared first on Remote DBA Experts.
Enterprises using Linux operating systems to run servers or desktops may want to consider hiring specialists to prevent exploits delivered through the "less" command.
In addition, Linux users should also be aware that they have been targeted by a dangerous cyberespionage operation that is believed to be headquartered in Russia. If these two threats go unacknowledged, enterprises that use Linux may sustain grievous data breaches.
A bug in the "less" command
The vulnerability concerning less was detailed by Lucian Constantin, a contributor to Computerworld. Constantin noted that less presents itself as a "harmless" command that enables users to view the contents of files downloaded from the Web. However, invoking less could also allow perpetrators to execute code remotely.
Less is typically used to view information without having to load entire files into a computer's memory, a huge help for those simply browsing documents. Behind the scenes, lesspipe is a script that automatically invokes third-party tools to process files with miscellaneous extensions such as .pdf, .gz, .xpi and so on.
One such tool, the cpio file archiver, could enable a cybercriminal to initiate an arbitrary code execution exploit. Essentially, this would give the attacker control over a machine, enabling him or her to manipulate it at will. This particular bug was discovered by Michal Zalewski, a Google security engineer.
"While it's a single bug in cpio, I have no doubt that many of the other lesspipe programs are equally problematic or worse," said Zalewski, as quoted by Constantin.
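The commonly suggested mitigation is to stop less from handing files to lesspipe's converters at all. A minimal sketch, assuming a shell where LESSOPEN/LESSCLOSE are the variables wiring lesspipe in (the usual setup on most distributions):

```shell
# Show whether a lesspipe hook is configured, then disable it so 'less'
# never shells out to third-party converters for odd file types.
echo "before: ${LESSOPEN:-unset}"
unset LESSOPEN LESSCLOSE
echo "after: ${LESSOPEN:-unset}"    # prints "after: unset"
```

Putting the unset lines in a shell profile trades lesspipe's convenience (viewing .gz or .pdf files directly) for a smaller attack surface.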
Taking aim and firing
The less command isn't the only thing Linux users should be concerned with. In a separate piece for PCWorld, Constantin noted that Russian cyberespionage group Epic Turla has directed its attention toward infiltrating machines running Linux.
Kaspersky Lab asserted Epic Turla is taking advantage of cd00r, an open-source backdoor program that was created in 2000. This tool enables users to execute arbitrary commands, as well as "listen" for instructions received via transmission control protocol or user datagram protocol packets, functionality that makes it a dangerous espionage asset.
"It can't be discovered via netstat, a commonly used administrative tool," said Kaspersky researchers, as quoted by Constantin. "We suspect that this component was running for years at a victim site, but do not have concrete data to support that statement just yet."
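The netstat point can be illustrated with a hedged sketch of my own: a conventional backdoor binds a port and appears as a listener, but a cd00r-style sniffer holds a packet socket instead, which on Linux shows up in /proc/net/packet rather than in netstat's listener output.

```shell
# Compare ordinary TCP listeners with packet (sniffing) sockets.
# Guards handle hosts missing ss/netstat or /proc/net/packet.
listeners=$( { ss -lnt || netstat -lnt; } 2>/dev/null | wc -l | tr -d ' ' )
packet_socks=$( tail -n +2 /proc/net/packet 2>/dev/null | wc -l | tr -d ' ' )
echo "listening sockets: $listeners, packet sockets: $packet_socks"
```

An unexpected entry in the packet-socket count, with no corresponding capture tool running, is the kind of signal netstat alone would miss.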
If Linux users want to secure their systems, consulting with specialists certified in the OS may not be a bad idea.
The post Linux users may need experts to reinforce malware detection functions appeared first on Remote DBA Experts.