Hi, welcome to RDX! Using SUSE Linux Enterprise Server to manage your workstations, servers and mainframes? SUSE recently released a few updates to the solution, dubbed Linux Enterprise Server 12, that professionals should take note of.
For one thing, SUSE addressed the "Shellshock" bug in Bash, the GNU Bourne Again Shell. This is a key fix, as it prevents attackers from remotely placing malicious code onto servers.
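The bug's mechanism can be illustrated with the widely circulated Shellshock probe string, wrapped here in a short Python script. This is a generic sketch, not SUSE-specific; it assumes bash is installed, and a patched system will print "patched".

```python
import subprocess

# Widely circulated Shellshock probe: a bash function definition exported in an
# environment variable, with trailing commands appended. A vulnerable bash runs
# the trailing "echo vulnerable" merely by importing the variable; a patched
# bash ignores it.
env = {"PATH": "/usr/bin:/bin", "x": "() { :;}; echo vulnerable"}
try:
    out = subprocess.run(["bash", "-c", "echo test"], env=env,
                         capture_output=True, text=True).stdout
    result = "vulnerable" if "vulnerable" in out else "patched"
except FileNotFoundError:
    result = "bash not found"
print(result)
```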
As far as disaster recovery capabilities are concerned, Linux Enterprise Server 12 is equipped with snapshot and full-system rollback features. These two functions enable users to revert a system to a previous configuration if it happens to fail.
Want a team of professionals that can help you capitalize on these updates? Look no further than RDX’s Linux team – thanks for watching!
Although support for Windows Server 2003 doesn't end until July of next year, enterprises that have used the operating system since its inception are transitioning to the solution's latest iteration, Windows Server 2012 R2.
Before diving into the implications of transitioning from Server 2003 to Server 2012 R2, it's important to answer a valid question: Why not simply make the switch to Windows Server 2008 R2?
It's a conundrum that Windows IT Pro contributor Orin Thomas has ruminated on since the announcement of Microsoft's discontinuation of Server 2003. While he acknowledged various reasons why some professionals are hesitant to make the leap from Server 2003 to Server 2012 R2 (such as application compatibility issues and the "Windows 8-style interface"), he pointed to a key concern: time.
Put simply, Server 2008 R2 will cease to receive updates and support on Jan. 14, 2020. Comparatively, Server 2012 R2's end of life is slated for Jan. 10, 2023.
In the event organizations have difficulty making the transition, there's always the option of seeking assistance from experts with certifications in Server 2012 R2. On top of migration and integration, these professionals can provide continued support throughout the duration of the solution's usage.
As companies using Windows Server 2003 will be moving to either Server 2008 R2 or Server 2012 R2, a number of implications must be taken into account. ZDNet contributor Ken Hess outlined several recommendations for those preparing for the migration:
- Identify how many Server 2003 systems you have in place.
- Aggregate and organize the hardware specifications for each system (CPU, memory, disk space, etc.).
- Assess how heavily these solutions were utilized over the years, then correlate them with projected growth and future workloads.
- Do away with systems that are no longer applicable to operations.
- Determine which applications running on top of Server 2003 are critical to the business model.
- Deduce how virtual machines can be leveraged to host underutilized processes.
- Collaborate with a database administration firm to outline and implement a migration plan (provide the partner with the data mentioned above).
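As a rough illustration, the first few inventory steps of this checklist can be sketched in a few lines of Python. The server names, hardware specs and the 10-percent retirement threshold below are all hypothetical:

```python
# Hypothetical inventory of Server 2003 hosts: aggregate hardware specs and
# flag lightly used systems as retirement/consolidation candidates.
servers = [
    {"name": "app01", "cpus": 4, "ram_gb": 8,  "disk_gb": 500,  "avg_util_pct": 62},
    {"name": "app02", "cpus": 2, "ram_gb": 4,  "disk_gb": 250,  "avg_util_pct": 9},
    {"name": "db01",  "cpus": 8, "ram_gb": 32, "disk_gb": 2000, "avg_util_pct": 71},
]

total_ram = sum(s["ram_gb"] for s in servers)

# Systems under 10% average utilization are candidates for retirement or for
# consolidation onto virtual machines.
retire = [s["name"] for s in servers if s["avg_util_pct"] < 10]

print(f"{len(servers)} systems, {total_ram} GB RAM total")
print("retirement candidates:", retire)
```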
These are just a few starting points on which to base a comprehensive migration plan. It's also important to be aware of unexpected spikes in server utilization. Although upsurges of 100 percent may occur infrequently, systems must be able to handle them effectively. As always, be sure to troubleshoot the renewed solution after implementation.
SQL injections have been named as the culprits of many database security woes, including the infamous Target breach that occurred at the commencement of last year's holiday season.
Content management system compromised
One particular solution was recently flagged as vulnerable to such hacking techniques. Chris Duckett, a contributor to ZDNet, referenced a public service announcement released by Drupal, an open source content management solution used to power millions of websites and applications.
The developer noted that, unless users patched their sites against SQL injection attacks before October 15, "you should proceed under the assumption that every Drupal 7 website was compromised." Drupal added that updating to version 7.32 patches the vulnerability, but websites that have already been exposed remain compromised – hackers may have already obtained back-end information.
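The class of flaw at issue here can be illustrated with Python's built-in sqlite3 module. This is a generic sketch of string-built versus parameterized queries, not Drupal's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"  # classic tautology injection string

# Vulnerable: attacker input is concatenated straight into the SQL text,
# turning the WHERE clause into a condition that is always true.
unsafe = conn.execute(
    "SELECT name FROM users WHERE password = '" + attack + "'").fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL syntax.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (attack,)).fetchall()

print(unsafe)  # the injection matches every row
print(safe)    # the injection matches nothing
```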
There is one way in which websites that sustained attacks could have remained protected. Database monitoring, regardless of the system being used, can alert administrators of problems as they arise, giving them ample time to respond to breaches.
Why database monitoring works
Although access controls, anti-malware software and other defenses are designed to detect and eradicate intrusions, some of their detection features leave something to be desired. Therefore, for programs capable of deterring SQL injections to operate to the best of their ability, they must work in conjunction with surveillance tools that constantly assess all database activity.
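A minimal sketch of what such a surveillance tool does is a rule-based scan of executed statements for common injection signatures. Commercial monitoring products apply far richer policies; the patterns and log below are illustrative only:

```python
import re

# Minimal sketch of a rule-based activity monitor: scan each statement the
# database executes for common injection signatures and flag it for alerting.
SUSPICIOUS = [
    re.compile(r"'\s*or\s+'?1'?\s*=\s*'?1", re.IGNORECASE),  # tautology
    re.compile(r";\s*drop\s+table", re.IGNORECASE),          # piggybacked DDL
]

def alerts(statements):
    """Return the statements that match any suspicious pattern."""
    return [s for s in statements if any(p.search(s) for p in SUSPICIOUS)]

log = [
    "SELECT * FROM orders WHERE id = 42",
    "SELECT * FROM users WHERE name = '' OR '1'='1'",
    "SELECT * FROM users WHERE id = 1; DROP TABLE users",
]
flagged = alerts(log)
print(len(flagged), "suspicious statements flagged")
```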
The Ponemon Institute polled 595 database experts on the matter, asking them about the effectiveness of server monitoring tools. While Chairman Larry Ponemon acknowledged the importance of using continuous monitoring to look for anomalous behavior, Secure Ideas CEO Kevin Johnson said some tools can miss SQL injections because the attacks are designed to appear legitimate. Therefore, it's important for surveillance programs to also be directed toward identifying vulnerabilities. Paul Henry, senior instructor at the SANS Institute, also weighed in on the matter.
"I believe in a layered approach that perhaps should include a database firewall to mitigate the risk of SQL injection, combined with continuous monitoring of the database along with continuous monitoring of normalized network traffic flows," said Henry, as quoted by the source.
At the end of the day, having a team of professionals on standby to address SQL injections if and when they occur is the best way to keep these attacks from escalating into massive consequences.
The post Database active monitoring a strong defense against SQL injections appeared first on Remote DBA Experts.
Database administrators, since the inception of their job descriptions, have been responsible for the protection of their organization’s most sensitive database assets. They are tasked with ensuring that key data stores are safeguarded against any type of unauthorized data access.
Since I’ve been a database tech for 25 years now, this series of articles will focus on the database system and some of the actions we can take to secure database data. We won’t be spending time on the multitude of perimeter protections that security teams are required to focus on. Once those mechanisms are breached, the last line of defense for the database environments will be the protections the database administrator has put in place.
You will notice that I will often refer to the McAfee database security protection product set when I describe some of the activities that will need to be performed to protect your environments. If you are truly serious about protecting your database data, you’ll quickly find that partnering with a security vendor is an absolute requirement and not “something nice to have.”
I could go into an in-depth discussion on RDX’s vendor evaluation criteria, but the focus of this series of articles will be on database protection, not product selection. After an extensive database security product analysis, we felt that the breadth and depth of McAfee’s database security offering provided RDX with the most complete solution available.
This is serious business, and you are up against some extremely proficient opponents. To put it lightly, “they are one scary bunch.” Hackers can be classified as intelligent, inquisitive, patient, thorough, driven and more often than not, successful. This combination of traits makes database data protection a formidable challenge. If they target your systems, you will need every tool at your disposal to prevent their unwarranted intrusions.
Upcoming articles will focus on the following key processes involved in the protection of sensitive database data stores:
Evaluating the Most Common Threats
In the first article of this series, I’ll provide a high level overview of the most common threat vectors. Some of the threats we will be discussing will include unpatched database software vulnerabilities, unsecured database backups, SQL injection, data leaks and a lack of segregation of duties. The spectrum of tactics used by hackers could fill an entire series of articles dedicated to database threats; the focus of these articles, however, is database protection activities, not a detailed threat vector analysis.
Identifying Sensitive Data Stored in Your Environment
You can’t protect what you don’t know about. The larger your environment, the more susceptible you will be to storing data that hasn’t been identified as sensitive to your organization. In this article, I’ll focus on how RDX uses McAfee’s vulnerability scanning software to identify databases that contain sensitive data, such as credit card or Social Security numbers stored in clear text. The remainder of the article will focus on identifying other objects that may contain sensitive and unprotected data, such as test systems cloned from production, database backups, load input files and report output.
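A toy version of this kind of clear-text discovery scan can be written with simplified regular expressions. Real scanners such as the one described above do far more, validating check digits and inspecting backups and binary formats; the patterns and sample rows here are illustrative:

```python
import re

# Simplified patterns for clear-text sensitive data. A production scanner
# would also apply the Luhn check to card candidates to cut false positives.
SSN_RE  = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def scan_rows(rows):
    """Return (row_index, kind) hits for rows of free-form text."""
    hits = []
    for i, text in enumerate(rows):
        if SSN_RE.search(text):
            hits.append((i, "ssn"))
        if CARD_RE.search(text):
            hits.append((i, "credit_card"))
    return hits

sample = [
    "customer note: called about invoice",
    "SSN on file: 123-45-6789",
    "card 4111-1111-1111-1111 declined",
]
print(scan_rows(sample))
```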
Initial and Ongoing Vulnerability Analysis
Determining how the databases are currently configured from a security perspective is the next step to be performed. Their release and patch levels will be identified and compared to vendor security patch distributions. How closely support teams adhere to industry and internal security best practices is also evaluated at this stage. The vulnerabilities will span the spectrum, from weak and default passwords to unpatched (and often well known) database software weaknesses.
Ranking the vulnerabilities allows the highest priority issues to be addressed more quickly than their less important counterparts. After the vulnerabilities are addressed, the configuration is used as a template for future database implementations. Subsequent scans, run on a scheduled basis, will ensure that no new security vulnerabilities are introduced into the environment.
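The ranking step can be sketched as a simple sort over scored findings. The findings, scores and the exposure-doubling weight below are hypothetical:

```python
# Sketch of vulnerability ranking: sort findings so the highest-severity,
# most-exposed issues are remediated first. All fields are hypothetical.
findings = [
    {"issue": "default sa password",         "severity": 9.8, "internet_facing": True},
    {"issue": "missing quarterly CPU patch", "severity": 7.5, "internet_facing": False},
    {"issue": "verbose error messages",      "severity": 4.3, "internet_facing": True},
]

# In this toy model, internet exposure doubles a finding's effective priority.
ranked = sorted(findings,
                key=lambda f: f["severity"] * (2 if f["internet_facing"] else 1),
                reverse=True)
for f in ranked:
    print(f["issue"])
```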
Database Data Breach Monitoring
Most traditional database auditing mechanisms are designed to report data access activities after they have occurred. There is no alerting mechanism. Auditing is activated, the data is collected and reports are generated that allow the various activities performed in the database to be analyzed for the collected time period.
Identifying a data breach after the fact is not database protection. It is database reporting. To protect databases we are tasked with safeguarding, we need a solution that has the ability to alert or alert and stop the unwarranted data accesses from occurring.
RDX found that McAfee’s Database Activity Monitoring product provides the real time protection we were looking for. McAfee’s product has the ability to identify, terminate and quarantine a user that violates a predefined set of database security policies.
To be effective, database breach protection must be configured as a stand-alone, separated architecture. Otherwise, internal support personnel could deactivate the breach protection service, whether by mistake or deliberately. This separation of duties is an absolute requirement for most industry compliance regulations, such as HIPAA, PCI DSS and SOX. The database must be protected from both internal and external threat vectors.
In an upcoming article of this series, we’ll learn more about real-time database activity monitoring and the benefits it provides to organizations that require a very high level of protection for their database data stores.
Ongoing Database Security Strategies
Once the database vulnerabilities have been identified and addressed, the challenge is to ensure that the internal support team’s future administrative activities do not introduce any additional security vulnerabilities into the environment.
In this article, I’ll provide recommendations on a set of robust, documented security controls and best practices that will assist you in your quest to safeguard your database data stores.
A documented plan to quickly address new database software vulnerabilities is essential to their protection. The hacker’s “golden window of zero day opportunity” exists from when the software’s weakness is identified until the security patch that addresses it is applied.
Separation of duties must also be considered. Are the same support teams that are responsible for your vulnerability scans, auditing and administering your database breach protection systems also accessing your sensitive database data stores?
Reliable controls will need to be implemented, including support role separation and the generation of audit records, to ensure proper segregation of duties so that even privileged users cannot bypass security.
Significant data breach announcements are publicized on a seemingly daily basis. External hackers and rogue employees continuously search for new ways to steal sensitive information. There is one component common to many of these thefts – the database data store. You need a plan to safeguard yours. If you don't have one, your organization may be the next one highlighted on the evening news.
Hi, welcome to RDX! Amid constant news of data breaches, ever wonder what's causing all of them? IBM and Ponemon's Global Breach Analysis can give you a rundown.
While some could blame employee mishaps or poor security, hacking is the number one cause of data breaches, including many of the most massive in scale. For example, when Adobe was hacked, approximately 152 million records were compromised.
As you can imagine, databases were prime targets. When eBay lost 145 million records to perpetrators earlier this year, hackers used the login credentials of just a few employees and then targeted databases holding user information.
To prevent such trespasses from occurring, organizations should employ active database monitoring solutions that scrutinize login credentials to ensure the appropriate personnel gain entry.
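One simple form of such scrutiny, flagging accounts with bursts of failed logins, can be sketched as follows. The events and the threshold of three failures are illustrative:

```python
from collections import Counter

# Sketch of credential scrutiny: flag accounts with repeated failed logins,
# the pattern behind stolen-credential attacks like the one eBay sustained.
events = [
    ("svc_report", "fail"), ("svc_report", "fail"), ("svc_report", "fail"),
    ("svc_report", "success"),  # a success right after a burst of failures
    ("jsmith", "success"),
]

failures = Counter(user for user, outcome in events if outcome == "fail")
flagged = sorted(user for user, n in failures.items() if n >= 3)
print("accounts to review:", flagged)
```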
Thanks for watching! Visit us next time for more news and tips about database protection!
The post Visualization shows hackers behind majority of data breaches appeared first on Remote DBA Experts.
Hi, welcome to RDX! Selecting a data warehouse appliance is a very important decision to make. The amount of data that companies store is continuously increasing, and DBAs now have many data storage technologies available to them. Uninformed decisions may cause a number of problems including limited functionality, poor performance, lack of scalability, and complex administration.
Oracle, Microsoft, and IBM understand the common data warehousing challenges DBAs face and offer data warehouse appliances that help simplify administration and help DBAs effectively manage large amounts of data.
Need help determining which data warehouse technology is best for your business? Be sure to check out the recent blog post by RDX VP of Technology Chris Foot, Data Warehouse Appliance Offerings, where he provides more details about each vendor's architecture and the benefits of each.
Thanks for watching. See you next time!
Between natural disasters, cybercrime and basic human error, organizations are looking for tools that support disaster recovery endeavors, as well as the professionals capable of using them.
A fair number of administrators often use Oracle's advanced recovery features to help them ensure business continuity for database-driven applications. The database vendor recently unveiled a couple of new offerings that tightly integrate with their Oracle Database architecture.
Restoration and recovery
IT-Online covered Oracle's Zero Data Loss Recovery Appliance, the first appliance of its kind built to ensure critical Oracle databases retain their information even if the worst should occur. The source maintained that Oracle's new architecture can protect thousands of databases using a cloud-based, centralized recovery appliance as the target.
In other words, the Recovery Appliance isn't simply built to treat databases as information repositories that need to be backed up every so often. The appliance's architecture replicates changes in real time to ensure that the recovery databases are constantly in sync with their production counterparts. Listed below are several features that make the architecture stand out among conventional recovery offerings:
- Live "Redo" data is continuously transported from the databases to the cloud-based appliance, protecting the most recent transactions so that servers don't sustain data loss in the event of a catastrophic failure
- To reduce the impact on the production environment, the Recovery Appliance architecture only delivers data that has been changed, which reduces server loads and network impact
- The appliance's automatic archiving feature allows backups to be automatically stored on low cost tape storage
- Data stored on the appliance can be used to recreate a historical version of the database
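The "deliver only changed data" idea in the list above can be sketched by comparing per-block digests between the production and recovery copies. This mimics the concept only; it is not Oracle's actual redo-shipping protocol, and the block size and data are illustrative:

```python
import hashlib

# Sketch of "send only what changed": hash fixed-size blocks of the production
# and recovery copies, and ship only the blocks whose digests differ.
BLOCK = 4  # toy block size in bytes

def digests(data):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

production = b"AAAABBBBCCCCDDDD"
recovery   = b"AAAAXXXXCCCCDDDD"  # block 1 has diverged

changed = [i for i, (p, r) in enumerate(zip(digests(production),
                                            digests(recovery))) if p != r]
print("blocks to ship:", changed)
```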
Simplifying database management and availability
The second offering, which Oracle hopes will enhance entire database infrastructures, is Oracle Database Appliance Management. The appliance manager application allows administrators to create rapid snapshots of both databases and virtual machines, enabling them to quickly create and allocate development and test environments.
"With this update of Oracle Database Appliance software, customers can now reap the benefits of Oracle Database 12c, the latest release of the world's most popular database right out of the box," said Oracle Vice President of Product Strategy and Business Development Sohan DeMel. "We added support for rapid and space-efficient snapshots for creating test and development environments, organizations can further capitalize on the simplicity of Oracle engineered systems with speed and efficiency."
The post Oracle assists professionals looking for recovery programs appeared first on Remote DBA Experts.
While providing database security services to cloud storage providers is possible, many such companies aren't taking the necessary precautions to ensure customer data remains protected.
According to 9to5Mac, Dropbox recently announced that a database holding 7 million logins for its users was infiltrated. The environment was operated by a third party, which was hired by Dropbox to store its customer data. To the company's relief, many of the stolen passwords were outdated.
While Dropbox is taking steps to mitigate the situation, the enterprise advised its customers to change their login information as an extra precaution.
The best way to prevent breaches from occurring is to install automated intrusion detection software. In addition, regularly auditing existing systems for vulnerabilities is considered a best practice.
Thanks for watching!
Hi, welcome to RDX. When searching for a database administration service, it's important to look for a company that prioritizes performance, security and availability.
How does RDX deliver such a service? First, we assess all vulnerabilities and drawbacks that are preventing your environments from operating efficiently. Second, we make any applicable changes that will ensure your business software is running optimally. From there, we regularly conduct quality assurance audits to prevent any performance discrepancies from arising.
In addition, we offer 24/7 support for every day of the year. We recognize that systems need to remain online on a continuous basis, and we're committed to making sure they remain accessible.
Thanks for watching!
Information Technology units will continue to be challenged by the unbridled growth of their organization’s data stores. An ever-increasing amount of data needs to be extracted, cleansed, analyzed and presented to the end user community. Data volumes that were unheard of a year ago are now commonplace. Day-to-day operational systems are now storing such large amounts of data that they rival data warehouses in disk storage and administrative complexity. New trends, products, and strategies, guaranteed by vendors and industry pundits to solve large data store challenges, are unveiled on a seemingly endless basis.
Choosing the Large Data Store Ecosystem
Choosing the correct large data store ecosystem (server, storage architecture, OS, database) is critical to the success of any application that is required to store and process large volumes of data. This decision was simple when the number of alternatives available was limited. With the seemingly endless array of architectures available, that choice is no longer as clear cut. Database administrators now have more choices available to them than ever before. In order to correctly design and implement the most appropriate architecture for their organization, DBAs must evaluate and compare large data store ecosystems and not the individual products.
Traditional Large Data Store Technologies
Before we begin our discussion on the various advanced vendor offerings, we need to review the database features that are the foundation of the customized architectures we will be discussing later in this article. It is important to note that although each vendor offering certainly leverages the latest technologies available, the traditional data storage and processing features that DBAs have been utilizing for years remain critical components of the newer architectures.
Partitioning data into smaller disk storage subsets allows the data to be viewed as a single entity while overcoming many of the challenges associated with the management of large data objects stored on disk.
Major database vendor products offer optimizers that are partition aware and will create query plans that access only those partitioned objects needed to satisfy the query’s data request (partition pruning). This feature allows administrators to create large data stores and still provide fast access to the data.
Partitioning allows applications to take advantage of “rolling window” data operations. Rolling windows allow administrators to roll off what is no longer needed. For example, a DBA may roll off the data in the data store containing last July’s data as they add this year’s data for July. If the data is ever needed again, administrators are able to pull the data from auxiliary or offline storage devices and plug the data back into the database.
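The pruning and rolling-window ideas above can be sketched with a toy partition map. The month keys and rows are illustrative:

```python
# Sketch of monthly partitioning: the table is a set of per-month buckets.
partitions = {
    "2014-05": [("order-1", 100)],
    "2014-06": [("order-2", 250), ("order-3", 80)],
    "2014-07": [("order-4", 40)],
}

def query(month):
    # Partition pruning: read only the bucket the predicate selects,
    # instead of scanning every partition in the table.
    return partitions.get(month, [])

def roll_window(new_month, rows):
    # Rolling window: roll off the oldest month as the newest is added.
    oldest = min(partitions)
    archived = partitions.pop(oldest)
    partitions[new_month] = rows
    return oldest, archived

print(query("2014-06"))  # touches one partition, not the whole table
rolled, _ = roll_window("2014-08", [("order-5", 15)])
print("rolled off:", rolled)
```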
Query parallelism improves data access performance by splitting work among multiple CPUs. Most database optimizers are also parallel aware and are able to break up a single query into subqueries that access the data simultaneously.
Without parallelism, a SQL statement’s work is performed by a single process. Parallel processing allows multiple processes to work together to simultaneously process a single SQL statement or utility execution. By dividing the work necessary to process a statement among multiple CPUs, the database can execute the statement more quickly than if the work was single-threaded.
The parallel query option can dramatically improve performance for data-intensive operations associated with decision support applications or very large database environments. Symmetric multiprocessing (SMP), clustered, or massively parallel systems gain the largest performance benefits from the parallel query option because query processing can be effectively split up among many CPUs on a single system.
Advanced Hardware and Software Technologies
Let’s continue our discussion by taking a high-level look at the advanced data warehouse offerings from the three major database competitors, Oracle, Microsoft and IBM. Each of the vendors’ offerings are proprietary data warehouse ecosystems, often called appliances, that consist of hardware, OS and database components. We’ll complete our review by learning more about Hadoop’s features and benefits.
Oracle’s Exadata Machine combines their Oracle database with intelligent data storage servers to deliver very high performance benchmarks for large data store applications. Exadata is a purpose-built warehouse ecosystem consisting of hardware, operating system and database components.
Oracle Exadata Storage Servers leverage high speed interconnects, data compression and intelligent filtering and caching features to increase data transfer performance between the database server and intelligent storage servers. In addition, the Exadata architecture is able to offload data intensive SQL statements to the storage servers to filter the results before the relevant data is returned to the database server for final processing.
Exadata uses PCI flash technology rather than flash disks. Oracle places the flash memory directly on the high speed PCI bus rather than behind slow disk controllers and directors. Each Exadata Storage Server includes 4 PCI flash cards that combine for a total of 3.2 TB of flash memory. Although the PCI flash can be utilized as traditional flash disk storage, it provides better performance when it is configured as a flash cache that sits in front of the disks. Exadata’s Smart Flash Cache will automatically cache frequently accessed data in the PCI cache, much like its traditional database buffer cache counterpart. Less popular data will continue to remain on disk. Data being sent to the PCI Flash cache is also compressed to increase storage capacity.
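The caching policy at work here, keep hot blocks in fast storage and evict the least recently used, can be sketched generically. This is not Exadata's implementation; the two-block capacity and the block IDs are illustrative:

```python
from collections import OrderedDict

# Generic least-recently-used cache fronting a slow "disk": frequently
# accessed blocks stay in fast storage, cold blocks fall back to disk.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key, load_from_disk):
        if key in self.store:
            self.store.move_to_end(key)      # hit: mark as recently used
            return self.store[key]
        value = load_from_disk(key)          # miss: slow path to disk
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return value

disk_reads = []
def read_disk(block_id):
    disk_reads.append(block_id)
    return f"data-{block_id}"

cache = LRUCache(2)
cache.get("A", read_disk)
cache.get("B", read_disk)
cache.get("A", read_disk)   # hit: no disk read
cache.get("C", read_disk)   # evicts B, the least recently used block
print(disk_reads)
```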
Exadata also offers an advanced compression feature called Hybrid Columnar Compression (HCC) to reduce storage requirements for large databases. Exadata offloads the compression/decompression workload to the processors contained in the Exadata storage servers.
These technologies enable Exadata to deliver high performance for large data stores accessed by both decision support and online operational systems. The Exadata machine runs an Oracle database, which allows Oracle-based applications to be easily migrated. Oracle describes the Exadata architecture as "scale out" capable, meaning multiple Exadata servers can be lashed together to increase computing and data access horsepower. Oracle RAC, as well as Oracle's Automatic Storage Management (ASM), can be leveraged to dynamically add more processing power and disk storage.
Microsoft SQL Server PDW
SQL Server Parallel Data Warehouse (PDW) is a massively parallel processing (MPP) data warehousing appliance designed to support very large data stores. Like Oracle’s Exadata implementation, the PDW appliance’s components consist of the entire database ecosystem including hardware, operating system and database.
Database MPP architectures use a “shared-nothing” architecture, where there are multiple physical servers (nodes), with each node running an instance of the database and having its own dedicated CPU, memory and storage.
Microsoft PDW’s architecture consists of:
- The MPP Engine
  - Responsible for generating parallel query execution plans and coordinating the workloads across the system’s compute nodes
  - Uses a SQL Server database to store metadata and configuration data for all of the databases in the architecture
  - In essence, it acts as the traffic cop and the “brain” of the PDW system
- Compute Nodes
  - Each compute node also runs an instance of the SQL Server database
  - The compute nodes’ databases are responsible for managing the user data
As T-SQL is executed in the PDW system, the queries are broken up to run simultaneously over multiple physical nodes, which utilizes parallel execution to provide high performance data access. The key to the success when using PDW is to select the appropriate distribution columns that are used to intelligently distribute the data amongst the nodes. The ideal distribution column is one that is accessed frequently, is able to evenly distribute data based on the column’s values and has low volatility (doesn’t change a lot).
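The effect of distribution-column choice can be sketched by hashing rows to nodes. The row set, node count and hash function below are illustrative, not PDW's actual algorithm:

```python
from collections import Counter
import zlib

# Sketch of MPP hash distribution: a row lands on the node determined by the
# hash of its distribution column. A high-cardinality column (customer_id)
# spreads rows evenly; a skewed, low-cardinality one (country) does not.
NODES = 4

def node_for(value):
    return zlib.crc32(str(value).encode()) % NODES

rows = [{"customer_id": i, "country": "US" if i % 10 else "CA"}
        for i in range(1000)]

by_customer = Counter(node_for(r["customer_id"]) for r in rows)
by_country  = Counter(node_for(r["country"]) for r in rows)
print("customer_id spread:", sorted(by_customer.values()))
print("country spread:    ", sorted(by_country.values()))  # few nodes used
```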
Microsoft Analytics Platform System (APS) - Hadoop and SQL Server Integration
Microsoft’s Analytics Platform System (APS) combines its massively parallel processing offering (PDW) with HDInsight, Microsoft’s version of Apache Hadoop. Microsoft has partnered with Hortonworks, a commercial Hadoop software vendor that provides a Windows-based, 100% Apache Hadoop distribution. See the section on Hadoop below for more detailed information on the Hadoop engine.
Integrating a Hadoop engine into SQL Server allows Microsoft to capture, store, process and present both structured (relational) and unstructured (non-relational) data within the same logical framework. Organizations wanting to process unstructured data often turned to Apache Hadoop environments which required them to learn new data storage technologies, languages and an entirely new processing architecture.
Microsoft’s Polybase provides APS users with the ability to query both structured and unstructured data with a single T-SQL based query. APS application programmers are not required to learn MapReduce or HiveQL to access data stored in the APS platform. Organizations using APS do not incur the additional costs associated with re-training their existing staff or hiring personnel with experience in Hadoop access methods.
IBM PureData Systems for Analytics
Not to be outdone by its database rivals, IBM provides a proprietary appliance called IBM PureData System for Analytics. The system, powered by Netezza technology, again combines the hardware, database and storage into a single platform offering. PureData System for Analytics is an MPP system utilizing IBM blade servers and dedicated disk storage servers that, like its competitors, is able to intelligently distribute workloads amongst the processing nodes.
IBM leverages field-programmable gate arrays (FPGAs) which are used in their FAST engines. IBM runs the FAST engine on each node to provide compression, data filtering and ACID compliance on the Netezza systems. The real benefit of FAST is that the FPGA technology allows the engines to be custom tailored to the instructions that are being sent to them for processing. The compiler divides the query plan into executable code segments, called snippets, which are sent in parallel to the Snippet Processors for execution. The FAST engine is able to customize the filtering according to the snippet being processed.
IBM’s Cognos, Data Stage, and InfoSphere Big Insights software products are included in the offering. IBM’s goal is to provide a total warehouse solution, from ETL to final data presentation, to Pure Data Analytics users.
In addition, IBM also provides industry-specific warehouse offerings for banking, healthcare, insurance, retail and telecommunications verticals. IBM’s “industry models” are designed to reduce the time and effort needed to design data warehousing systems for the organizations in these selected business sectors. IBM provides the data warehouse design and analysis templates to accelerate the data warehouse build process. IBM consulting assists the customer to tailor the architecture to their organization’s unique business needs.
Non-Database Vendor Technologies
New “disruptive” products that compete with the traditional database vendor offerings continue to capture the market’s attention. The products span the spectrum, from NoSQL products that provide easy access to unstructured data to entirely new architectures like Apache’s Hadoop.
Major database vendors will make every effort to ensure that disruptive technologies gaining market traction become an enhancement, not a replacement, for their traditional database offerings. Microsoft’s APS platform is an excellent example of this approach.
Apache’s Hadoop is a software framework that supports data-intensive distributed applications under a free license. Hadoop clusters commodity servers to offer scalable, affordable large-data storage and distributed processing in a single architecture.
A Hadoop cluster consists of a single master and multiple worker nodes. The master provides job control and scheduling services to the worker nodes. Worker nodes provide storage and computing services. The architecture is distributed, in that the nodes do not share memory or disk.
A distributed architecture allows computing horsepower and storage capacity to be added without disrupting on-going operations. Hadoop’s controlling programs keep track of the data located on the distributed servers. In addition, Hadoop provides multiple copies of the data to ensure data accessibility and fault tolerance.
Hadoop connects seamlessly to every major RDBMS through open-standard connectors providing developers and analysts with transparent access through tools they are familiar with. When used simply as a large-data storage location, it is accessible through a variety of standards-based methods such as FUSE or HTTP. Hadoop also offers an integrated stack of analytical tools, file system management and administration software to allow for native exploration and mining of data.
The Hadoop architecture efficiently distributes processing workloads among dozens or even hundreds of cost-effective worker nodes. This capability dramatically improves the performance of applications accessing large data stores. Hadoop support professionals view hundreds of gigabytes as small data stores and regularly build Hadoop architectures that access terabytes and petabytes of structured and unstructured data.
One of Hadoop’s biggest advantages is speed. Hadoop is able to generate reports in a fraction of the time required by traditional database processing engines. The reductions can be measured by orders of magnitude. Because of this access speed, Hadoop is quickly gaining acceptance in the IT community as a leading alternative to traditional database systems when large data store technologies are being evaluated.
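The divide-and-aggregate pattern behind that speedup can be sketched in plain Python. This is an illustrative stand-in for Hadoop's MapReduce model, not Hadoop's actual API: map tasks emit key/value pairs from local data chunks in parallel, and a reduce step merges the results.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Each worker node emits (word, 1) pairs for its local chunk of data.
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # The shuffle groups pairs by key; the reducer sums the counts per word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# These chunks stand in for data blocks spread across worker nodes.
chunks = ["big data big", "data store"]
mapped = chain.from_iterable(map_phase(c) for c in chunks)
totals = reduce_phase(mapped)
# totals == {"big": 2, "data": 2, "store": 1}
```

Because each map task touches only its own chunk, adding worker nodes spreads the map work with no shared state, which is exactly why the architecture scales.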
As stated previously, there is an endless array of offerings that focus on addressing large data store challenges. Large data store architecture selection is the most important decision that is made during the warehouse development project. A correctly chosen architecture will allow the application to perform to expectations, have the desired functionality and be easily monitored and administered. Incorrect architecture decisions may cause one or more of the following problems to occur: poor performance, limited functionality, high total cost of ownership, complex administration and tuning, lack of scalability, poor vendor support, poor reliability/availability and so on. All market-leading database vendors understand the importance of addressing the challenges inherent with large data stores and have released new products and product enhancements designed to simplify administration and improve performance.
Hi, welcome to RDX. Virtualization and cloud technology pretty much go hand-in-hand.
Many popular cloud providers, such as Amazon and Rackspace, use Xen, an open-source virtualization platform, to optimize their environments.
According to Ars Technica, those behind the Xen Project recently released a warning to those using its platform. Apparently, a flaw within the program's hypervisor allows cybercriminals to corrupt a Xen virtual machine, or VM. From there, perpetrators could read information stored on the VM, or cause the server hosting it to crash. Monitoring databases hosted on Xen VMs is just one necessary step companies should take. Reevaluating access permissions and reinforcing encryption should also be priorities.
Thanks for watching! Be sure to visit us next time for more advice on security vulnerabilities.
The post Open Source Virtualization Project at Risk [VIDEO] appeared first on Remote DBA Experts.
In the era of big data, database administration services are finding ways to work with NoSQL environments, Hadoop and other solutions.
Yet, these professionals aren't so quick to write off the capabilities of open source relational databases. When Oracle purchased the rights to distribute MySQL, a few critics were a bit skeptical of the software giant's intentions. Evidently, the company intends to refine and improve MySQL as much as possible.
MySQL Fabric 1.5
I Programmer highlighted MySQL Fabric 1.5, a framework that lets DBAs better manage collections of MySQL databases. Through OpenStack, an open source cloud computing software platform used to support Infrastructure-as-a-Service solutions, Fabric 1.5 users can employ a wider range of sharding keys.
One of the most notable functions Fabric 1.5 has to offer is its ability to automatically detect failures and then employ failover through MySQL Replication. Basically, if the server for the master database unexpectedly shuts down, Fabric 1.5 chooses a slave database to become the new master.
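The promotion step can be sketched as a simple policy: among the healthy slaves, promote the one with the least replication lag. This is an illustrative sketch in Python, not MySQL Fabric's actual API; the server names and field names (`healthy`, `lag_seconds`) are assumptions for the example.

```python
def choose_new_master(slaves):
    """Promote the healthy slave with the least replication lag.

    `slaves` maps server name -> status dict. The schema here is
    illustrative, not MySQL Fabric's real data model.
    """
    candidates = {name: s for name, s in slaves.items() if s["healthy"]}
    if not candidates:
        raise RuntimeError("no healthy slave available for promotion")
    return min(candidates, key=lambda name: candidates[name]["lag_seconds"])

slaves = {
    "db-2": {"healthy": True, "lag_seconds": 4},
    "db-3": {"healthy": True, "lag_seconds": 1},
    "db-4": {"healthy": False, "lag_seconds": 0},
}
new_master = choose_new_master(slaves)
# new_master == "db-3": least lag among the healthy candidates
```

Preferring the least-lagged slave minimizes the number of transactions that must be recovered or discarded when the new master takes over.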
For various reasons, more organizations are hosting databases in cloud environments. Managing servers distributed across broad infrastructures requires tools that can quickly identify and assign tasks to particular machines, and Oracle has recognized this.
Christopher Tozzi, a contributor to The VAR Guy, acknowledged that Oracle's Enterprise Manager can now support MySQL, allowing enterprises using the solution to better monitor and administer database functions for public and private cloud deployments. A statement released by the developer asserted Enterprise Manager also allows users to migrate to MySQL technology.
Ulf Wendel's contribution
Encryption is a regular component of database security, point-of-sale implementations, network protection and a plethora of other IT considerations.
One protocol, SSL 3.0, was recently deemed sub-par. Dark Reading noted that Google experts discovered a vulnerability in the nearly 15-year-old encoding rule that could potentially allow cybercriminals to initiate man-in-the-middle attacks against users.
What is "man-in-the-middle"?
MITM intrusions are some of the most malicious attacks organizations can sustain. According to Computer Hope, a MITM attack occurs when an attacker intercepts the path between an entity sending information and the object or person receiving the data.
For example, if Person A delivered an email to Person C, then Person B could initiate a MITM attack, manipulate the message however he or she sees fit, and then transfer the email to Person C. As one can see, this capability is quite dangerous.
A fault for the skilled
Google researchers dubbed the vulnerability CVE-2014-3566, naming the attack that exploits it Padding Oracle On Downgraded Legacy Encryption, or POODLE. Apparently, a POODLE infiltration would be incredibly difficult to pull off, meaning only the most experienced hackers are capable of using the method to their advantage.
Although SSL was replaced by updated encryption protocols, it's still employed to support antiquated software and older client servers. Nevertheless, these applications and machines likely hold valuable information for many companies, and enterprises should strongly consider consulting database administration services to apply revisions and new data encoding processes.
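On the application side, one concrete remediation is refusing legacy protocols outright. A minimal sketch using Python's standard ssl module (the `TLSVersion` setting is available in Python 3.7+; modern OpenSSL builds drop SSLv3 entirely):

```python
import ssl

# Build a client context that refuses legacy protocols such as SSL 3.0.
# Pinning a TLS 1.2 floor blocks the protocol-downgrade path that
# attacks like POODLE rely on.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate verification stays on by default for a client context.
assert context.verify_mode == ssl.CERT_REQUIRED
```

Passing this context to any TLS-capable client (for example, an HTTPS or database connection library that accepts an `ssl` context) ensures a connection simply fails rather than silently downgrading.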
As far as vendor-related services go, Google will remove SSL 3.0 from its client programs, while Mozilla intends to do the same on November 25.
Despite this cause for concern, WhiteOps Chief Scientist Dan Kaminsky assured Dark Reading that it's "not as bad as Heartbleed," but still a consideration companies should take seriously.
The post Database administrators may have to worry about POODLE attacks appeared first on Remote DBA Experts.
Hi, welcome to RDX. RDX has a wide range of platform-specific experience to help keep your database environment highly available and high performance. Our DBAs can help supplement any gaps in skill sets, leaving your internal team to focus on the tasks they do best.
Whether you prefer to store information in SQL Server, Oracle, MySQL, PostgreSQL or Hyperion/Essbase, our specialists provide you with a wealth of expertise and support. Our staff is well-versed in optimizing and protecting all of these environments 24×7, providing your business with a greater peace of mind.
In addition to our varied expertise, we provide clients with the choice of customizing their environments. We’re here to accommodate any of your unique business needs, and our DBA experts are equipped to solve your toughest challenges.
Thanks for watching. Be sure to watch next time.
Hi, welcome to RDX. When a mission-critical system becomes unavailable, it can threaten the survivability of an organization. That’s why RDX has a Database Operations Center team responsible for the proactive monitoring of all clients’ environments, 24×7.
Our monitors are custom tailored for every environment we support, and our specialists are trained in database and operating system problem resolution. This combination delivers peace of mind for our clients when they know the Database Operation Center is watching out for their highly available, high performance, and mission-critical environments. If a major problem does transpire, our experts notify the client immediately – creating a game plan on how to resolve the situation.
Thanks for watching! Next time, we'll discuss our platform-specific solutions.
Welcome to RDX! For those using Oracle Products, Oracle’s October critical patch update contains an unusually high number of security bug fixes.
ZDNet contributor Liam Tun noted that Oracle released patches for 155 security flaws across 44 of its products on October 14. The fixes include 25 for Java SE; the components affected include Java SE, Java SE Embedded, JavaFX and JRockit. The highest Common Vulnerability Scoring System (CVSS) rating among the Java fixes was a 10, the highest rating available.
Also included are 32 fixes for Oracle Database Server products, with at least one receiving a CVSS rating of 9; 17 fixes for Oracle Fusion Middleware; four for Oracle Retail Applications; 15 for Oracle Sun Systems Product Suite and 24 for Oracle MySQL.
Many of these vulnerabilities may be remotely exploitable without authentication.
Thanks for watching!
Hi, and welcome to RDX. In this portion of our "services" series, we'll discuss how we provide companies with all of their database administration needs.
With RDX's full DBA support services, we become your DBA team and assume complete responsibility for the functionality, security, availability and performance of your database environments. We know that each company has unique goals and demands, which is why we also implement guidelines and protocols based on your organization's specific requirements.
In addition, we're willing to fill in any DBA role from our offerings that your company may need. You get the expertise and best practices of over 100 DBA experts for less than the cost of a single in-house resource.
Thanks for watching! Stay tuned for other ways to work with RDX soon.
System and database administrators from health care institutions are facing several challenges.
On one hand, many are obligated to migrate legacy applications to state-of-the-art electronic health record solutions. In addition, they need to ensure the information contained in those environments is protected.
Operating systems, network configurations and a wealth of other factors can either make or break security architectures. If these components are unable to receive frequent updates from vendor-certified developers, it can cause nightmares for database administration professionals.
Windows XP no longer a valid option
When Microsoft ceased to provide support for Windows XP in early April, not as many businesses upgraded to Windows 7 or 8 as the software vendor's leaders had hoped. This means those using XP will no longer receive regular security updates, leaving them open to attacks as hackers work to find vulnerabilities in the OS.
Despite continuous warnings from Microsoft and the IT community, Information Security Buzz contributor Rebecca Herold believes that a large percentage of medical devices currently in use are running XP. Her allegations are based on reports submitted by health care electronics producers stating that they leverage XP for the sensors' graphical user interfaces, as well as to create connections to external databases.
Because Microsoft has yet to release the source code of XP, health care companies using these implementations have no way of identifying vulnerabilities independently. Even if the source code were distributed, it's unlikely that the majority of medical providers could use in-house resources to search for security flaws. The only way to defend the servers linked with devices running XP is to employ active database monitoring.
Public sector experiencing vulnerabilities
Healthcare.gov apparently isn't picture-perfect, either. Fed Scoop reported that white hat hackers working for the U.S. Department of Health and Human Services' Office of the Inspector General discovered that personally identifiable information was secured, but some data controlled by the Centers for Medicare and Medicaid Services lacked adequate protection.
After an assessment of the CMS systems and databases was completed, the IG advised the organization to encrypt files with an algorithm approved under Federal Information Processing Standard (FIPS) 140-2. However, CMS officials concluded this wasn't necessary.
Although this wasn't the first audit of Healthcare.gov (and it likely won't be the last), the information held within its servers is too valuable for cybercriminals to ignore. Setting up an automated, yet sophisticated intrusion detection program to notify DBAs when user activity appears inconsistent is a step the CMS should strongly consider taking.
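The core idea behind such a program can be sketched with a toy statistical rule: flag any period whose query volume strays far from the norm. This is an illustrative z-score sketch, not a real intrusion detection product, which would weigh many signals (source IP, tables touched, time of day) rather than volume alone; the threshold value is an assumption.

```python
from statistics import mean, stdev

def flag_anomalies(hourly_queries, z_threshold=2.0):
    """Return indexes of hours whose query volume deviates sharply.

    A toy z-score rule: flag hours more than `z_threshold` standard
    deviations from the mean. The threshold is illustrative.
    """
    mu, sigma = mean(hourly_queries), stdev(hourly_queries)
    return [i for i, q in enumerate(hourly_queries)
            if sigma and abs(q - mu) / sigma > z_threshold]

# Seven ordinary hours, then a sudden spike a DBA should be told about.
activity = [100, 96, 104, 99, 101, 103, 98, 900]
suspicious = flag_anomalies(activity)
# suspicious == [7]: only the spike at index 7 is flagged
```

When a flagged index comes back, the monitoring layer would page a DBA or raise a ticket rather than act on the data itself.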
The post Public, private health care systems possess security vulnerabilities appeared first on Remote DBA Experts.
Hi, welcome to RDX. With news about data breaches sweeping the Web on a regular basis, it's no surprise that the latest victim was a major U.S. bank.
According to Bloomberg, hackers gained access to a server operated by JPMorgan Chase, stealing data on 76 million homes and 7 million small businesses.
After further investigation, the FBI discovered the hackers gained access through a server lacking two-factor authentication. From there, the perpetrators exploited flaws in the bank's custom software to reach JPMorgan's data banks without the security team's knowledge.
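Two-factor authentication of the kind that server lacked is commonly implemented with time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using only Python's standard library; the shared secret below is the RFC's published test secret, not a real credential:

```python
import hmac, hashlib, struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    # The Unix timestamp is bucketed into 30-second counter intervals.
    counter = struct.pack(">Q", at // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client derive the same short-lived code from a shared
# secret, so a stolen password alone is not enough to log in.
secret = b"12345678901234567890"   # RFC 6238 test secret
print(totp(secret, at=59))          # -> "287082" per the RFC test vectors
```

A login service would compute the code for the current time window (and usually the adjacent windows, to tolerate clock drift) and compare it against what the user submits.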
To prevent such attacks from occurring, firms should regularly assess their databases and solutions to find vulnerabilities.
Thanks for watching! Be sure to visit us next time for info on RDX's security services.
The post JPMorgan hack joins list of largest data breaches in history [VIDEO] appeared first on Remote DBA Experts.
Have what it takes to enhance your open source databases?
Welcome back to the RDX blog. Whether you prefer to store your information in MySQL or PostgreSQL, we can provide you with a complete range of administrative support.
In addition to 24×7 onshore and remote service, our staff can deploy sophisticated monitoring architectures customized to fit both MySQL and PostgreSQL. This ensures your data is available at all times.
Speaking of accessibility, our experts are well-versed in advanced PostgreSQL tools, such as the new Foreign Data Wrapper. According to Silicon Angle, this function enables staff to easily pull remote objects stored in analytic clusters.
Thanks for watching, and be sure to join us next time.