Hi, welcome to RDX! In the past, cybercriminals typically focused on operating systems and software written in C or C++. Now, they’re redirecting their attention to Web applications and services built with Java and .NET.
One such attack, dubbed “Operation Aurora,” occurred in 2009. Allegedly, the initiative was conducted by hackers connected to the Chinese military. The perpetrators directed their attention toward Adobe, Rackspace and others to manipulate application source code.
How can enterprises prepare for these kinds of attacks? Backing up their applications and the data within those programs is the best course of action. In addition, companies should install malware detection programs to catch corrupted software before it spreads.
Thanks for watching! Be sure to check in again for more security news and tips.
The post Hackers targeting simple Web application vulnerabilities [VIDEO] appeared first on Remote DBA Experts.
Hi, welcome to RDX! Just before Sony Pictures was set to release “The Interview,” a previously unidentified group of hackers released confidential files stored in Sony’s databases.
“The Interview” is a comedy about a TV host ordered to assassinate North Korean dictator Kim Jong-un. After a two-week investigation, the Federal Bureau of Investigation confirmed that the North Korean government was responsible for the data breach. As the film was a satire about Kim Jong-un’s regime, it makes sense that such a damaging attack would originate from North Korea.
From RDX’s perspective, deterring these kinds of attacks requires businesses to install database security monitoring software that alerts database administrators any time an unauthorized user begins copying information.
Thanks for watching!
The post FBI concludes North Korean hackers responsible for Sony breach [VIDEO] appeared first on Remote DBA Experts.
Hi, welcome to RDX! Many of you have probably heard of a Windows Server vulnerability that allows hackers to assign domain user accounts the same access privileges as administrator accounts.
As many Windows Server experts know, this enables attackers to easily infiltrate computers and other machines within a Windows Server domain. However, a hacker would have to possess valid domain credentials to take advantage of the bug.
Thankfully, Microsoft released an update to Windows Server 2012 R2 and its predecessors to resolve the issue. This fix ensures that a Kerberos service ticket cannot be forged. Companies looking for Windows Server gurus with extensive experience in security should check out RDX’s Windows service package.
Thanks for watching!
Hi, welcome to RDX! At times, cybercriminals may be acting for political or nationalistic reasons. One hacker cell has been suspected of harboring such motivations.
Cylance, a cybersecurity research firm based in California, reported the group has successfully infiltrated notable energy, defense and airline companies. The study’s authors warned that if attacks from the Iranian cell continue, the physical safety of people around the world could be put at risk. An Iranian diplomat told news sources that Cylance’s assertion was unsubstantiated.
To help prevent cyber-attacks, it’s imperative that defense contractors, energy firms and other such businesses reevaluate their database security protocols. Applying monitoring tools capable of identifying anomalies is the first step, but proactively searching for bugs and applying patches is an absolute must.
Thanks for watching!
System administrators have favored Unix for its simplicity and versatility – two traits most open source programs possess. Not to mention, the operating system is free.
However, like any other piece of software, it's not without vulnerabilities. WeLiveSecurity noted researchers at ESET, CERT-Bund and other organizations discovered a comprehensive cybercriminal operation that compromised tens of thousands of Unix servers. The source advised sysadmins to make a thorough assessment of their Unix machines and reinstall the OS if they believe them to be infected.
This instance is an example of the dangers Unix users face. This doesn't mean such professionals should panic. Generally, there are four traits Unix specialists should possess in order to ensure the OS is performing optimally and is devoid of security vulnerabilities:
1. You're proactive
Implementing security monitoring tools that peruse Unix servers for bugs, malware and other negative discrepancies is a good practice to employ. In addition, regularly scrutinizing the performance of these machines helps experts glean insights as to which factors may be hindering efficiencies. Putting minor vulnerabilities on the back burner is acceptable only when higher priorities must be addressed first – never let these flaws go unaddressed.
2. You know the technology
This may seem like a point too obvious to mention, but it shouldn't be written off. ITWorld's Sandra Henry-Stocker maintained that knowing how the servers perform under normal conditions will provide professionals with some insight as to why Unix servers are using more memory during certain time periods, for example.
3. You take just enough time to know what went wrong
The big system crashes you've heard your colleagues discuss are inevitable. Sure, maintenance and attention could reduce the chances of such disasters occurring, but that doesn't mean they're not going to happen. So, after the problem has been resolved, take some time to assess what went wrong and how it could have been prevented. That being said, don't spend half the day trying to figure out the issue.
4. You document your work
Henry-Stocker advised Unix admins to create a brief outline of every tool they build with the operating system. Although it's not necessarily the most glamorous part of the job, doing so will help you:
- a) remember every step you took when constructing an app, and
- b) provide your colleagues with a reliable point of reference.
As you're most likely working in a team, making your co-workers' jobs that much easier can't hurt.
Hi, welcome to RDX! Banks, retailers and other organizations that use point-of-sale and payment software developed by Charge Anywhere should take extra precautions to ensure their databases are protected.
The software developer recently announced that it sustained a breach that may have compromised data that was produced as far back as 2009. This event reaffirms cybersecurity experts’ assertions that cybercriminals are targeting companies that provide payment software, as opposed to simply attacking merchants.
While it’s up to Charge Anywhere and other such enterprises to patch any bugs in their software, those using these programs should ensure their point-of-sale databases containing payment card info are strictly monitored. Training DBAs to better manage their access credentials is another necessary step.
Thanks for watching!
The post Point-of-sale developers find themselves targets for cyberattacks [VIDEO] appeared first on Remote DBA Experts.
The Database Protection Series Continues – Evaluating the Most Common Threats and Vulnerabilities – Part 1
This is the second article of a series that focuses on securing your database data stores. In the introduction, I provided an overview of the database protection process and what will be discussed in future installments. Before we begin the activities required to secure our databases, we need to have a firm understanding of the most common database vulnerabilities and threats.
This two-part article is not intended to be an all-inclusive listing of database vulnerabilities and threat vectors. We’ll take a look at some of the more popular tactics used by hackers as well as some common database vulnerabilities. As we cover the topics, I’ll provide you with some helpful hints along the way to decrease your exposure. The list is not in any particular order. In future articles, I’ll refer back to this original listing from time to time to ensure that we continue to address them.
Separation of Duties (Or Lack Thereof)
Every major industry regulation is going to have separation of duties as a compliance objective. If your organization complies with SOX, you should be well aware of the separation of duties requirements. In order for my organization to satisfy our PCI compliance objectives, we need to constantly ensure that no single person, or group, is totally in control of a security function. Since we don’t store or process PCI data, we focus mainly on securing the architecture. So, I’ll use my organization’s compliance activities to provide two quick examples:
- We assign the responsibility of security control design to an internal RDX team and security control review to a third-party auditing firm.
- Personnel assigned the responsibility of administering our internal systems, which include customer access auditing components, do not have privileges to access our customers’ systems.
The intent is to prevent conflicts of interest, intentional fraud, collusion or unintentional errors from increasing the vulnerability of our systems. For smaller organizations, this can be a challenge, as it introduces additional complexity into the administration processes and can lead to an increase in staff requirements. The key is to review all support functions related to the security and access of your systems and prioritize them according to the vulnerability created by misuse. Once the list is complete, you decompose the support activities into more granular activities and divide the responsibilities accordingly.
Unidentified Data Stores Containing Sensitive Data
It’s a pretty simple premise – you can’t protect a database that you don’t know about. The larger the organization, the greater the chance that sensitive data is being stored and not protected. Most major database manufacturers provide scanners that allow you to identify all of their product’s installations. These are most often used during the dreaded licensing audits. As part of our database security service offering, RDX uses McAfee’s Vulnerability Scanner to identify all databases installed on the client’s network.
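Scanners of this kind lean on pattern matching to flag values that look sensitive. The sketch below is a loose, hypothetical illustration of that idea – the regexes and the Luhn filter are assumptions for the example, and commercial analyzers such as McAfee's apply far larger and smarter rule sets:

```python
import re

# Hypothetical patterns for the example; real analyzers ship far
# more comprehensive rule sets.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out digit runs that merely look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_value(value: str) -> list:
    """Return the kinds of sensitive data a column value appears to contain."""
    hits = []
    if SSN_PATTERN.search(value):
        hits.append("ssn")
    for match in CARD_PATTERN.finditer(value):
        if luhn_valid(match.group()):
            hits.append("payment_card")
    return hits
```

The Luhn checksum pass is what separates an arbitrary run of digits from a plausible payment card number, which keeps false positives down.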
Once you identify these “rogue” data stores, your next goal is to find out what’s in them. This can be accomplished by asking the data owner to provide you with that information. A better strategy is to purchase one of the numerous data analyzers available on the market. The data analyzer executes sophisticated pattern matching algorithms to identify potentially sensitive data elements. Because of the complex matching process that has to occur, there aren’t a lot of free offerings on the web you can take advantage of. In our case, McAfee’s database scanner also includes the data identification feature. It helps us to uncover the sensitive data elements that are hidden in our customers’ database data stores.
Clones from Production
Application developers have a particular affinity for wanting real world data to test with. Can you blame them? There’s a myriad of data variations they need to contend with. Cloning live environments allows them to focus on writing and testing their code and less on the mindless generation of information that attempts to mimic production data stores.
Cloning creates a whole host of vulnerabilities. The cloning process creates a duplicate of the production environment and it needs to be secured accordingly. In addition, application developers shouldn’t have access to sensitive production data. They’ll need access to the cloned systems to perform their work. The first step is to identify the sensitive elements and then create a strategy to secure them.
Data masking, also known as data scrambling, allows administrators to effectively secure cloned data stores. After the cloning process is performed, the administrator restricts access to the system until the data scrambling process is complete. The key to a successful masking process is to replace the original data with as realistic a replacement as possible. Masking is not intended to be an alternative to encryption; its intent is to obscure the original values stored in the system.
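To make the scrambling idea concrete, here is a minimal, hypothetical sketch. Each character keeps its class (digits stay digits, letters stay letters) and separators survive, so the masked value still looks realistic to application code; real masking products layer format rules and referential integrity on top of this:

```python
import random
import string

def mask_rows(rows, sensitive_columns, seed=None):
    """Replace sensitive column values with realistic-looking substitutes.

    A toy illustration of data scrambling: character classes and
    separators are preserved so the masked value stays plausible,
    but the original value is gone.
    """
    rng = random.Random(seed)

    def scramble(value):
        out = []
        for ch in str(value):
            if ch.isdigit():
                out.append(rng.choice(string.digits))
            elif ch.isalpha():
                out.append(rng.choice(string.ascii_lowercase))
            else:
                out.append(ch)  # keep separators so formats survive
        return "".join(out)

    # Build new rows rather than mutating the originals
    return [
        {col: scramble(val) if col in sensitive_columns else val
         for col, val in row.items()}
        for row in rows
    ]
```

Note that this sketch is not reversible by design, which is exactly the property you want in a cloned test environment.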
There are several types of data masking offerings available. Your first step should be to check your database product’s feature list. Oracle, for example, provides a data masking feature. There’s also a wealth of third-party products available to you. If you have limited funding, search the internet for database masking and you’ll find lots of free alternatives. Use a strong level of due diligence if you have to use a free alternative and check the data before and after scrambling – no matter which option you choose.
If your cloning process creates any files that contain data – most often used to transfer the database from the source to the target – wipe them out after the cloning process is complete. Lastly, perform an in-depth account security review. Remove all accounts that aren’t needed for testing and create only the new ones required to provide your application developers with the functionality they need. You will also need to secure your cloned systems’ files accordingly; in part 2 of this article, we’ll discuss backup, output and load files.
Default, Blank and Weak Passwords
Years ago, after my career as an Oracle instructor, I became an industry consultant. One of my company’s offerings was the database assessment. I reviewed customers’ environments to ensure they were optimized for performance, availability and security. Those were the days when breaches, and the resulting attention paid to security, were far less prominent than they are today. I’d bring up a logon screen to the database and attempt to log in using the default passwords that were available in Oracle. At that time there were about a dozen or so accounts automatically available after database creation. I always loved the reaction of the client as they watched me successfully access their environment. At each login, I’d say “this one gives me DBA”, “this one gives me access to all of your data tables”…. You can’t believe how many times I successfully logged in using sys/change_on_install.
Although Oracle, like most major database vendors, has ratcheted down on default accounts, default and weak passwords are still a problem. The database documentation will contain a listing of the accounts that are automatically created during installation. Some advanced features will also require accounts to be activated. After database creation, and every time you install a new feature, do a quick scan of the documentation and then query the database’s user catalog to see if you have additional accounts to secure.
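The post-installation check boils down to comparing the accounts present in the database against the documented default list. A hedged sketch – the names below are a few well-known historical Oracle defaults, not an exhaustive list, and the account names would come from a catalog query such as Oracle's DBA_USERS:

```python
# Illustrative defaults only; the vendor's documentation is the
# authoritative list (sys/change_on_install is the classic Oracle case).
KNOWN_DEFAULTS = {
    "sys": "change_on_install",
    "system": "manager",
    "scott": "tiger",
    "dbsnmp": "dbsnmp",
}

def audit_accounts(db_accounts):
    """Flag account names that appear in the known-defaults list.

    db_accounts: names pulled from the database's user catalog
    (e.g., DBA_USERS in Oracle, mysql.user in MySQL).
    """
    return sorted(acct for acct in db_accounts
                  if acct.lower() in KNOWN_DEFAULTS)
```

Any account this flags should have its password changed immediately and, if the account isn't needed, be locked or removed.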
All major database vendors including Oracle, SQL Server, MySQL and DB2 provide password complexity mechanisms. Some of them are automatically active when the database is created while others must be manually implemented. In addition, most allow you to increase the complexity by altering the code.
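As a small illustration of password generation under a complexity policy, the sketch below uses Python's secrets module and simply retries until every character class is represented. The policy itself is an assumption for the example, not any vendor's rule:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a password satisfying a simple complexity policy:
    at least one lowercase, one uppercase, one digit and one symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:  # retry until every required character class appears
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Rejection sampling keeps the result uniform over all qualifying passwords, which avoids the bias introduced by forcing one character of each class into fixed positions.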
Once your complexity strategy is underway, you’ll need to use a password vault to store your credentials. I’m particularly fond of vaults that also provide a password generator. The password vault’s feature list will be important. When you perform your analysis of password vaults, some of the more important features to focus on are:
- How the vault enforces its own security
- Logging and auditing features available
- Encryption at rest, during transfer and of backups
- Early-warning systems
- Dual launch key capabilities (it takes two personnel to check out a password)
- Automatic notification when a password is checked out
- How it handles separation of duties
- Whether it can record the actions taken on the targets after the credentials are accessed
Unencrypted Data At Rest
Database encryption, if administered correctly, provides a strong defense against data theft. Most major database vendors provide data encryption as part of the product’s feature set. Microsoft SQL Server and Oracle both call theirs Transparent Data Encryption (TDE), IBM provides a few alternatives including InfoSphere Guardium, and MySQL provides a set of functions that perform data encryption.
You’ll also need to determine what data you want to encrypt as well as how you want to do it. Most of the vendor offerings allow you to encrypt data at different levels including column, table, file and database. Like most database features, you will need to balance security with processing overhead. Most encryption alternatives will add enough overhead to impact database performance. Identify the sensitive data elements and encrypt them.
Key management is crucial. You can use all the encryption you want, but your environment will still be vulnerable if you don’t perform effective key management. Here are a couple of RDX best practices for encryption key management:
- Keep your encryption algorithms up to date. Vendors release updates for their encryption features on a fairly regular basis, and new database versions often contain significant security enhancements, including new and improved encryption functionality. This shouldn’t be up for debate with anyone in your organization: if you are storing sensitive data, keep encryption functionality current.
- DB2, Oracle, SQL Server and MySQL all have strong encryption features. If your database product doesn’t, you’ll need to rely upon a robust, third-party product. Encryption isn’t a feature you want to skimp on, or one to entrust to a homegrown solution.
- Store the keys securely in a safe, centralized location. During our security analysis, we have seen a few shops store their keys in the same areas as the data they are encrypting. Storing them in a centralized location allows you to lock that storage area down, provide separation of duties and activate access alerts.
- Key rotation is the tech term for changing your encryption key values on a regular basis. Depending on the database, this can be a fairly complex process. For other implementations, it’s fairly simple. Complex or simple, come up with a plan to rotate your keys at least yearly.
In part 2 of this article, we’ll cover unsecured data transmissions, securing input, output, report and backup files, SQL Injection and buffer overflow protection and a few other topics. We’ll then continue our discussion on the process of securing our sensitive database data stores by outlining the key elements of a database security strategy.
For retailers, a vulnerability in their servers' operating systems could mean millions of dollars in losses, depending on how quick hackers are to react to newly discovered bugs.
As Linux is affordable, efficient and versatile, many e-commerce and brick-and-mortar merchants use the OS as their go-to system, according to Alert Logic's Tyler Borland and Stephen Coty. The duo noted Linux also provides a solid platform on which e-commerce and point-of-sale software can run smoothly.
The Grinch targeting Linux?
Due to Linux's popularity among retailers, it's imperative they assess a vulnerability that was recently discovered – a bug that has been given the nickname "Grinch" by researchers. Dark Reading's Kelly Jackson Higgins noted the fault hasn't been labeled as an "imminent threat," but it's possible that some malicious actors would be able to leverage Grinch to escalate permissions on Linux machines and then install malware.
Coty and Borland noted that Alert Logic's personnel discovered the bug, which exploits the "su" command – an instruction that enables one user to masquerade as another. Access to su is governed by the wheel user group: when a Linux system is built, the default user is made a member of the wheel group, providing administrative rights.
"Anyone who goes with a default configuration of Linux is susceptible to this bug," Coty told Jackson Higgins. "We haven't seen any active attacks on it as of yet, and that is why we wanted to get it patched before people started exploiting it."
Where the flaw lies
Jackson Higgins maintained the Grinch is "living" in the Polkit, a.k.a. PolicyKit for Linux. Polkit is a privilege management system that allows administrators to assign authorizations to general users. Coty and Borland outlined the two main concepts experts should glean from Polkit:
- One of Polkit's uses lies in the ability to determine whether the program should initiate privileged operations for a user who requested the action to take place.
- Polkit access and task permission tools can identify multiple active sessions and seats, the latter of which is described as an "untrusted user's reboot request."
"Each piece of this ecosystem exposes possible vulnerabilities through the backend D-Bus implementation, the front-end Polkit daemon, or even userland tools that use Polkit for privilege authorization," wrote Coty and Borland.
Despite these concerns, Coty informed Jackson Higgins that this vulnerability won't have to be patched until after the holiday season, and only inexperienced Linux users are likely to encounter serious problems.
Hi, welcome to RDX! The Federal Bureau of Investigation recently sent a five-page document to businesses, warning them of a particularly destructive type of malware. It is believed the program was the same one used to infiltrate Sony's databases.
The FBI report detailed the malware's capabilities. Apparently, the software overwrites all information on computer hard drives, including the master boot record. This could prevent servers from accessing critical software, such as operating systems or enterprise applications.
Database data can be lost or corrupted for many reasons. Regardless of whether the data loss was due to a hardware failure, human error or the deliberate act of a cybercriminal, database backups ensure that critical data can be quickly restored. RDX's backup and recovery experts are able to design well-thought-out strategies that help organizations protect their databases from any type of unfortunate event.
Thanks for watching!
Enterprises looking for Oracle experts knowledgeable of the software giant's latest database solutions may discover some DBAs' certifications are no longer valid.
In addition, companies using SAP applications would do well to hire DBAs who know how to optimize Oracle's server solutions. Many SAP programs leverage Oracle 12c databases as the backbone of their functionality, so ensuring that SAP's software can use and secure data within these environments efficiently is a must.
Releasing new accreditation standards
TechTarget's Jessica Sirkin commented on Oracle's certification requirements, which now state that DBAs must take tests within one year in order to renew their accreditations. The exams vet a person's familiarity with more recent versions of Oracle Database. Those with certifications in 7.3, 8, 8i and 9i must undergo tests to obtain accreditations in 10g, 11g or 12c to retain their active status within Oracle's CertView portal system.
These rules apply to any professional holding a Certified Associate, Professional, Expert or Master credential in the aforementioned solutions. Those already possessing accreditation in 12c or 11g will be able to retain their active statuses for the foreseeable future, but DBAs with 10g certifications will be removed from the CertView list on March 1 of next year.
As for the company's reasons, Oracle publicly stated that the measures are intended to "have qualified people implementing, maintaining and troubleshooting our software."
SAP gets closer to Oracle
Those who use Oracle's databases to power their SAP software are in luck. Earlier this year, SAP certified its solutions to coincide with the rollout of an updated version of Oracle 12c, specifically build 126.96.36.199. One of the reasons why SAP is supporting Oracle's flagship database is because the company wants to provide its customers with more flexible upgrade plans from 11g Release 2 to Oracle's latest release. SAP's supporting features will include the following:
- A multitenancy option for 12c, which allows multiple pluggable databases to be consolidated and managed within a single container database.
- Hybrid columnar compression technology, which will certainly help those who are trying to engineer back-end databases to store more information.
Most importantly, the news source acknowledged the fact that many businesses use Oracle's database products in conjunction with SAP's enterprise software. Incompatibility between the two has been a persistent headache for IT departments working with these two solutions, but SAP's move will improve the matter.
Overall, hiring a team of database experts experienced in working with different software running on top of Oracle is a safe bet for organizations wary of this change.
The post Oracle revises its database administrator accreditation portfolio appeared first on Remote DBA Experts.
While Oracle's database engine and Microsoft's SQL Server are among the top three server solutions among enterprises, by no means is PostgreSQL being left in the dust.
This year, The PostgreSQL Global Development Group released PostgreSQL 9.4, which was equipped with several bug fixes as well as a few new capabilities, such as:
- An ALTER SYSTEM command can be used to change configuration file entries
- Materialized views can now be refreshed without blocking concurrent reads
- Logical decoding for WAL data was added
- Background worker processes can now be initiated, logged and terminated dynamically
PostgreSQL 9.4 is currently available for download on the developer's website. Users can access versions that are compatible with specific operating systems, including SUSE, Ubuntu, Solaris, Windows and Mac OS X.
Rising to fame?
ZDNet contributor Toby Wolpe spoke with EnterpriseDB chief architect Dave Page, who is also a member of PostgreSQL's core team, on the open source solution's popularity. He maintained that PostgreSQL's capabilities are catching up to Oracle's, an assertion that may not be shared by everyone but one worth acknowledging nonetheless.
Page referenced PostgreSQL as "one of those well kept secrets that people just haven't cottoned on to." One of the reasons why PostgreSQL has gained so much traction lately is MySQL's purchase by Sun, which was in turn acquired by Oracle. According to Page, people are apprehensive regarding Oracle's control of MySQL, another relational database engine.
Throughout his interview with Wolpe, Page noted a general sentiment among DBAs who are switching from MySQL to PostgreSQL because of the latter solution's "feature-rich" content. It stands to reason that PostgreSQL would have plenty of functions to offer DBAs, given its open source format. When users customize aspects of PostgreSQL in ways that complement not only their own workflow but that of other DBAs as well, such capabilities tend to be integrated into the next release permanently.
One particular feature that has managed to stick in PostgreSQL is foreign data wrappers. Wolpe noted that data wrappers enable remote information to be categorized as a table within PostgreSQL, meaning queries can be run across both PostgreSQL tables and foreign data as if it were native.
Another tool provides support for JSONB data, allowing information to be stored within PostgreSQL in a binary format. The advantage of this function is that it enables a new index operator that speeds up queries.
While PostgreSQL may not be the engine of choice for some DBAs, it is a solution worth acknowledging.
Effective disaster recovery plans admittedly take a lot of time, resources and attention to develop, which may cause some small and mid-sized businesses to shy away from the practice. While it's easy to think "it could never happen to me," that's certainly not a good mindset to possess.
While the sole proprietor of a small graphic design operation may want to set up a disaster recovery plan, he or she may not know where to start. It's possible that the application used to create designs resides in-house, but the system used to deliver content to customers may be hosted through the cloud. It's a confusing situation, especially if one doesn't have experience in IT.
SMEs need DR, but don't have robust plans
To understand how strong small and mid-sized enterprises' DR strategies are, Dimensional Research and Axcient conducted a survey of 453 IT professionals working at companies possessing between 50 and 1000 workers. The study found that 71 percent of respondents back up both information and software, but only 24 percent back up all their data and applications. Other notable discoveries are listed below:
- A mere 7 percent of survey participants felt "very confident" that they could reboot operations within two hours of an incident occurring.
- More than half (53 percent) of respondents asserted company revenues would be lost until critical systems could be rebooted.
- Some 61 percent of SMBs use multiple backup and recovery tools that perform the same functions.
- Almost three-fourths maintain that using multiple DR assets can increase the risk of endeavors failing.
- Eighty-nine percent surveyed view cloud-based DR strategies as incredibly desirable. The same percentage acknowledged that business workers are much less productive during outages.
What measures can SMBs take?
For many IT departments at SMEs, taking advantage of cloud-based DR plans can be incredibly advantageous. IT Business Edge's Kim Mays noted that decision-makers should pay close attention to the information and applications employees access most often to perform day-to-day tasks. Allowing these IT assets to transition to cloud infrastructures in the event of a disaster will allow workers to continue with their responsibilities.
Of course, using a cloud-based strategy isn't the be-all, end-all to a solid DR blueprint. For instance, it's possible that personnel residing in affected areas may not have Internet access. This is where a business' senior management comes into play: Set guidelines that will allow staff to make decisions that will benefit the company during an outage.
The post Yes, SMBs should pay attention to disaster recovery appeared first on Remote DBA Experts.
Enterprises using Linux operating systems to run servers or desktops may want to consider hiring specialists to prevent actions initiated by the "less" command.
In addition, Linux users should also be aware that they have been targeted by a dangerous cyberespionage operation that is believed to be headquartered in Russia. If these two threats go unacknowledged, enterprises that use Linux may sustain grievous data breaches.
A bug in the "less" command
The vulnerability concerning less was detailed by Lucian Constantin, a contributor to Computerworld. Constantin noted that less presents itself as a "harmless" instruction that enables users to view the contents of files downloaded from the Web. However, using the less directive could also allow perpetrators to execute code remotely.
Less is typically used to view information without having to load entire files into a computer's memory, a huge help for those simply browsing documents. Behind the scenes, lesspipe is a script that automatically invokes third-party tools to process files with miscellaneous extensions such as .pdf, .gz, .xpi, and so on.
One such tool, the cpio file archiver, could enable a cybercriminal to initiate an arbitrary code execution exploit. Essentially, this would give him or her control over a machine, enabling them to manipulate it at will. This particular bug was discovered by Michal Zalewski, a Google security engineer.
"While it's a single bug in cpio, I have no doubt that many of the other lesspipe programs are equally problematic or worse," said Zalewski, as quoted by Constantin.
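The shape of the risk is easier to see from the dispatch pattern lesspipe follows: the preview helper is chosen purely from the file's extension. The mapping below is an illustrative sketch, not lesspipe's actual table:

```python
import os

# Illustrative extension-to-helper table; lesspipe's real script covers
# many more formats and chains of tools.
PREVIEW_HANDLERS = {
    ".gz": ["gzip", "-dc"],
    ".pdf": ["pdftotext"],
    ".cpio": ["cpio", "-itv"],  # the helper implicated in Zalewski's finding
}

def preview_command(filename):
    """Pick the external preview command the way lesspipe-style scripts do:
    purely by file extension."""
    _, ext = os.path.splitext(filename)
    handler = PREVIEW_HANDLERS.get(ext)
    if handler is None:
        return None  # no helper: less reads the file directly
    return handler + [filename]
```

A crafted archive with a benign-looking name therefore reaches whichever helper owns that extension, and any bug in that helper becomes reachable through a simple file view.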
Taking aim and firing
The less command isn't the only thing Linux users should be concerned with. In a separate piece for PCWorld, Constantin noted that Russian cyberespionage group Epic Turla has directed its attention toward infiltrating machines running Linux.
Kaspersky Lab asserted Epic Turla is taking advantage of cd00r, an open-source backdoor program that was created in 2000. This particular tool enables users to initiate arbitrary directives, as well as "listen" for commands received via transmission control protocol or user datagram protocol – functionality that makes it a dangerous espionage asset.
"It can't be discovered via netstat, a commonly used administrative tool," said Kaspersky researchers, as quoted by Constantin. "We suspect that this component was running for years at a victim site, but do not have concrete data to support that statement just yet."
If Linux users want to secure their systems, consulting with specialists certified in the OS may not be a bad idea.
The post Linux users may need experts to reinforce malware detection functions appeared first on Remote DBA Experts.
Volume and velocity are two words analysts are associating with health care data, motivating CIOs to assess the scalability and security of their current database infrastructures.
Protecting the sensitive information contained within electronic health records has always been a concern, but the greatest issue at hand is that some health care providers don't have the personnel, assets or time required to effectively manage and defend their databases. These concerns may incite mass adoption of outsourced database administration services.
Greater volume at a faster rate
CIO.com's Kenneth Corbin referenced a report conducted by EMC and research firm IDC, which discovered that the amount of health information is expected to increase 48 percent on an annual basis for the foreseeable future. In 2013, 153 exabytes of health care data existed. By 2020, that figure is anticipated to expand to 2,314 exabytes.
EMC and IDC analysts proposed a scenario in which all of that information was stored on a stack of tablets. Referencing the 2020 statistic, they asserted that stack would be more than 82,000 miles high, reaching a third of the way to the moon. IDC Health Insights Research Vice President Lynne Dunbrack maintained that health care companies can prepare for this explosion of data by identifying who owns the information and classifying it.
"Understanding what the data means is key to making data governance and interoperability work, and is essential for analytics, big data initiatives and quality reporting initiatives, among other things," wrote Dunbrack in an email, as quoted by Corbin.
More data means greater security concerns
As hospitals, insurance providers, clinics and other such organizations implement EHR software and increase their data storage capacities, it stands to reason that hackers will place the health care industry at the top of their list of targets. Health care records contain a plethora of valuable data, from Social Security numbers to checking account information.
Health IT Security cited the problems Aventura Hospital and Medical Center in South Florida have encountered. Over the past two years, the institution has sustained three data breaches, one of which was caused by a vendor's employee who stole information on an estimated 82,000 patients. Worst of all, the worker was an employee of Valesco Ventures, Aventura's Health Insurance Portability and Accountability Act business associate.
With this particular instance in mind, finding a database administration service with trustworthy employees is essential. In addition, contracting a company that can provide remote database monitoring 24/7/365 is a must – there can be no compromises.
The post Data management challenges, concerns for health care companies appeared first on Remote DBA Experts.
Hi, welcome to RDX! Using SUSE Linux Enterprise Server to manage your workstations, servers and mainframes? SUSE recently released a few updates to the solution, dubbed Linux Enterprise Server 12, that professionals should take note of.
For one thing, SUSE addressed the vulnerability in the GNU Bourne Again Shell (Bash), also known as the “Shellshock” bug. This is a key fix, as it prevents hackers from placing malicious code onto servers through remote computers.
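A widely circulated way to check whether a given Bash build is still exposed to Shellshock (CVE-2014-6271) is to export a crafted function definition and see whether the trailing command executes. The sketch below wraps that well-known probe in Python; it is an illustration, not a complete vulnerability scanner.

```python
import subprocess

# Classic Shellshock (CVE-2014-6271) probe: define an environment
# variable holding a crafted function body and check whether bash
# executes the trailing command when it imports the variable.
env = {
    "testvar": "() { :;}; echo VULNERABLE",
    "PATH": "/usr/bin:/bin",
}
result = subprocess.run(
    ["bash", "-c", "echo probe done"],
    env=env, capture_output=True, text=True,
)

if "VULNERABLE" in result.stdout:
    print("bash is vulnerable to Shellshock -- patch immediately")
else:
    print("bash did not execute the injected command")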
As far as disaster recovery capabilities are concerned, Linux Enterprise Server 12 is equipped with snapshot and full-system rollback features. These two functions enable users to revert back to the original configuration of a system if it happens to fail.
Want a team of professionals that can help you capitalize on these updates? Look no further than RDX’s Linux team – thanks for watching!
Although support for Windows Server 2003 doesn't end until July of next year, enterprises that have used the operating system since its inception are transitioning to the solution's latest iteration, Windows Server 2012 R2.
Before diving into the implications of transitioning from Server 2003 to Server 2012 R2, it's important to answer a valid question: Why not simply make the switch to Windows Server 2008 R2?
It's a conundrum that Windows IT Pro contributor Orin Thomas has ruminated on since the announcement of Microsoft's discontinuation of Server 2003. While he acknowledged various reasons why some professionals are hesitant to make the leap from Server 2003 to Server 2012 R2 (such as application compatibility issues and the "Windows 8-style interface"), he pointed to a key concern: time.
Basically, Server 2008 R2 will cease to receive updates and support on Jan. 14, 2020. Comparatively, Server 2012 R2's end of life is slated for Jan. 10, 2023.
In the event organizations have difficulty making the transition, there's always the option of seeking assistance from experts with certifications in Server 2012 R2. On top of migration and integration, these professionals can provide continued support throughout the duration of the solution's usage.
As companies using Windows Server 2003 will be moving to either Server 2008 R2 or Server 2012 R2, a number of implications must be taken into account. ZDNet contributor Ken Hess outlined several recommendations for those preparing for the migration:
- Identify how many Server 2003 systems you have in place.
- Aggregate and organize the hardware specifications for each system (CPU, memory, disk space, etc.).
- Assess how heavily these solutions were utilized over the years, then correlate them with projected growth and future workloads.
- Do away with systems that are no longer applicable to operations.
- Determine which applications running on top of Server 2003 are critical to the business model.
- Deduce how virtual machines can be leveraged to host underutilized processes.
- Collaborate with a database administration firm to outline and implement a migration plan (provide the partner with the data mentioned above).
These are just a few starting points on which to base a comprehensive migration plan. Also, it's important to be aware of unexpected spikes in server utilization. Although upsurges of 100 percent may occur infrequently, it's important that systems will be able to handle them effectively. As always, be sure to troubleshoot the renewed solution after implementation.
SQL injections have been named as the culprits of many database security woes, including the infamous Target breach that occurred at the commencement of last year's holiday season.
Content management system compromised
One particular solution was recently flagged as vulnerable to such hacking techniques. Chris Duckett, a contributor to ZDNet, referenced a public service announcement released by Drupal, an open source content management solution used to power millions of websites and applications.
The developer noted that, unless users patched their sites against SQL injection attacks before October 15, "you should proceed under the assumption that every Drupal 7 website was compromised." Drupal added that updating to 7.32 will patch the vulnerability, but websites that have already been exposed remain compromised – hackers have already obtained back-end information.
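The class of flaw behind announcements like Drupal's can be shown with a toy example (Drupal itself is PHP; the sketch below uses Python and sqlite3 purely for illustration). Concatenating user input into SQL text lets a crafted value rewrite the query; a bound parameter is always treated as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Vulnerable pattern: user input concatenated straight into the SQL text.
# A crafted value turns the WHERE clause into a tautology.
evil = "nobody' OR '1'='1"
leaked = conn.execute(
    "SELECT name, role FROM users WHERE name = '%s'" % evil
).fetchall()
print("injected query returned:", leaked)  # every row leaks

# Safe pattern: a bound parameter is treated as data, never as SQL.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (evil,)
).fetchall()
print("parameterized query returned:", safe)  # no rows
```

The same principle applies regardless of language or database engine: queries should be parameterized everywhere untrusted input touches SQL.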
There is one way in which websites that sustained attacks could have remained protected. Database monitoring, regardless of the system being used, can alert administrators of problems as they arise, giving them ample time to respond to breaches.
Why database monitoring works
Although access permissions, anti-malware programs and other assets are designed to dismantle and eradicate intrusions, some of their detection features leave something to be desired. Therefore, for programs capable of deterring SQL injections to operate to the best of their ability, they must work in conjunction with surveillance tools that constantly assess all database actions.
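The kind of constant assessment described above can be illustrated with a toy anomaly check: tally rows read per account over a window of audit events and raise an alert when a threshold is exceeded. The event format and limit below are hypothetical, stand-ins for what a real monitoring product would collect:

```python
from collections import Counter

# Toy continuous-monitoring pass: sum rows read per account within a
# window of audit events and alert on any account over the threshold.
# The event shape and the limit are illustrative assumptions.
ROWS_PER_WINDOW_LIMIT = 1000

audit_window = [
    {"user": "app_svc", "rows_read": 40},
    {"user": "etl_job", "rows_read": 600},
    {"user": "app_svc", "rows_read": 55},
    {"user": "intruder", "rows_read": 250_000},  # bulk copy in progress
]

totals = Counter()
for event in audit_window:
    totals[event["user"]] += event["rows_read"]

alerts = [u for u, n in totals.items() if n > ROWS_PER_WINDOW_LIMIT]
print("accounts exceeding read threshold:", alerts)  # ['intruder']
```

Production tools baseline normal behavior per account rather than using a single static limit, but the alert-on-deviation principle is the same.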
The Ponemon Institute polled 595 database experts on the matter, asking them about the effectiveness of server monitoring tools. While Chairman Larry Ponemon acknowledged the importance of using continuous monitoring to look for anomalous behavior, Secure Ideas CEO Kevin Johnson said some tools can miss SQL injections because the attacks are designed to appear legitimate. Therefore, it's important for surveillance programs to also be directed toward identifying vulnerabilities. Paul Henry, senior instructor at the SANS Institute, also weighed in on the matter.
"I believe in a layered approach that perhaps should include a database firewall to mitigate the risk of SQL injection, combined with continuous monitoring of the database along with continuous monitoring of normalized network traffic flows," said Henry, as quoted by the source.
At the end of the day, having a team of professionals on standby to address SQL injections if and when they occur is the only way to keep the consequences of these attacks from escalating.
The post Database active monitoring a strong defense against SQL injections appeared first on Remote DBA Experts.
Database administrators, since the inception of their job descriptions, have been responsible for the protection of their organization’s most sensitive database assets. They are tasked with ensuring that key data stores are safeguarded against any type of unauthorized data access.
Since I’ve been a database tech for 25 years now, this series of articles will focus on the database system and some of the actions we can take to secure database data. We won’t be spending time on the multitude of perimeter protections that security teams are required to focus on. Once those mechanisms are breached, the last line of defense for the database environments will be the protections the database administrator has put in place.
You will notice that I will often refer to the McAfee database security protection product set when I describe some of the activities that will need to be performed to protect your environments. If you are truly serious about protecting your database data, you’ll quickly find that partnering with a security vendor is an absolute requirement and not “something nice to have.”
I could go into an in-depth discussion on RDX’s vendor evaluation criteria, but the focus of this series of articles will be on database protection, not product selection. After an extensive database security product analysis, we felt that the breadth and depth of McAfee’s database security offering provided RDX with the most complete solution available.
This is serious business, and you are up against some extremely proficient opponents. To put it lightly, “they are one scary bunch.” Hackers can be classified as intelligent, inquisitive, patient, thorough, driven and more often than not, successful. This combination of traits makes database data protection a formidable challenge. If they target your systems, you will need every tool at your disposal to prevent their unwarranted intrusions.
Upcoming articles will focus on the following key processes involved in the protection of sensitive database data stores:
Evaluating the Most Common Threats and Vulnerabilities
In the first article of this series, I’ll provide a high-level overview of the most common threat vectors. Some of the threats we will be discussing include unpatched database software vulnerabilities, unsecured database backups, SQL injection, data leaks and a lack of segregation of duties. The spectrum of tactics used by hackers could fill an entire series of articles dedicated to database threats; the scope of these articles is database protection activities, not a detailed threat vector analysis.
Identifying Sensitive Data Stored in Your Environment
You can’t protect what you don’t know about. The larger your environment, the more susceptible you will be to data being stored that hasn’t been identified as sensitive to your organization. In this article, I’ll focus on how RDX uses McAfee’s vulnerability scanning software to identify databases that contain sensitive data, such as credit card or Social Security numbers stored in clear text. The remainder of the article will focus on identifying other objects that may contain sensitive and unprotected data, such as test systems cloned from production, database backups, load input files, report output, etc.
Initial and Ongoing Vulnerability Analysis
Determining how the databases are currently configured from a security perspective is the next step to be performed. Their release and patch levels will be identified and compared to vendor security patch distributions. An analysis of how closely support teams adhere to industry and internal security best practices is evaluated at this stage. The types of vulnerabilities will span the spectrum, from weak and default passwords to unpatched (and often well-known) database software weaknesses.
Ranking the vulnerabilities allows the highest priority issues to be addressed more quickly than their less important counterparts. After the vulnerabilities are addressed, the configuration is used as a template for future database implementations. Subsequent scans, run on a scheduled basis, will ensure that no new security vulnerabilities are introduced into the environment.
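The prioritization step can be pictured as a simple sort of open findings by severity, so the riskiest items surface first. The finding names and CVSS-style scores below are made-up examples, not output from any scanner:

```python
# Toy prioritization pass: rank open findings by a CVSS-style score
# so the highest-risk items are addressed first. All data is invented.
findings = [
    {"issue": "default sa password", "score": 9.8},
    {"issue": "unpatched listener vulnerability", "score": 7.5},
    {"issue": "verbose error messages", "score": 4.3},
    {"issue": "stale test account", "score": 5.0},
]

ranked = sorted(findings, key=lambda f: f["score"], reverse=True)
for f in ranked:
    print(f"{f['score']:>4}  {f['issue']}")
```

A real program would also weight findings by the sensitivity of the data behind each database, not score alone.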
Database Data Breach Monitoring
Most traditional database auditing mechanisms are designed to report data access activities after they have occurred. There is no alerting mechanism. Auditing is activated, the data is collected and reports are generated that allow the various activities performed in the database to be analyzed for the collected time period.
Identifying a data breach after the fact is not database protection. It is database reporting. To protect databases we are tasked with safeguarding, we need a solution that has the ability to alert or alert and stop the unwarranted data accesses from occurring.
RDX found that McAfee’s Database Activity Monitoring product provides the real time protection we were looking for. McAfee’s product has the ability to identify, terminate and quarantine a user that violates a predefined set of database security policies.
To be effective, database breach protection must be configured as a stand-alone, separated architecture. Otherwise, internal support personnel could deactivate the breach protection service by mistake or deliberate intent. This separation of duties is an absolute requirement for most industry compliance regulations such as HIPAA, PCI DSS and SOX. The database must be protected from both internal and external threat vectors.
In an upcoming article of this series, we’ll learn more about real-time database activity monitoring and the benefits it provides to organizations that require a very high level of protection for their database data stores.
Ongoing Database Security Strategies
Once the database vulnerabilities have been identified and addressed, the challenge is to ensure that the internal support team’s future administrative activities do not introduce any additional security vulnerabilities into the environment.
In this article, I’ll provide recommendations on a set of robust, documented security controls and best practices that will assist you in your quest to safeguard your database data stores.
A documented plan to quickly address new database software vulnerabilities is essential to protecting your databases. The hacker’s “golden window of zero-day opportunity” lasts from when the software’s weakness is identified until the security patch that addresses it is applied.
Separation of duties must also be considered. Are the same support teams that are responsible for your vulnerability scans, auditing and administering your database breach protection systems also accessing your sensitive database data stores?
Reliable controls will need to be implemented, including support role separation and the generation of audit records, to ensure proper segregation of duties so that even privileged users cannot bypass security.
Significant data breach announcements are publicized on a seemingly daily basis. External hackers and rogue employees continuously search for new ways to steal sensitive information. There is one component that is common to many thefts – the database data store. You need a plan to safeguard them. If not, your organization may be the next one that is highlighted on the evening news.
Hi, welcome to RDX! Amid constant news of data breaches, ever wonder what's causing all of them? IBM and Ponemon's Global Breach Analysis can give you a rundown.
While some could blame employee mishaps or poor security, hacking is the number one cause of data breaches, many of which are massive in scale. For example, when Adobe was hacked, approximately 152 million records were compromised.
As you can imagine, databases were prime targets. When eBay lost 145 million records to perpetrators earlier this year, hackers used the login credentials of just a few employees and then targeted databases holding user information.
To prevent such trespasses from occurring, organizations should employ active database monitoring solutions that scrutinize login credentials to ensure the appropriate personnel gain entry.
Thanks for watching! Visit us next time for more news and tips about database protection!
The post Visualization shows hackers behind majority of data breaches appeared first on Remote DBA Experts.
Hi, welcome to RDX! Selecting a data warehouse appliance is a very important decision to make. The amount of data that companies store is continuously increasing, and DBAs now have many data storage technologies available to them. Uninformed decisions may cause a number of problems including limited functionality, poor performance, lack of scalability, and complex administration.
Oracle, Microsoft, and IBM understand the common data warehousing challenges DBAs face and offer data warehouse appliances that help simplify administration and help DBAs effectively manage large amounts of data.
Need help determining which data warehouse technology is best for your business? Be sure to check out RDX VP of Technology Chris Foot’s recent blog post, Data Warehouse Appliance Offerings, where he provides more details about each vendor’s architecture and the benefits of each.
Thanks for watching. See you next time!