When outsourcing your database administration to RDX, you have options. We offer two main models for our remote administration services: Full Database Administration and Supplemental Database Administration outsourcing services.
This allows us to tailor our services to your company’s specific needs. With full remote DBA outsourcing, we provide 100% of your database administration needs. Your assigned Primary and Secondary DBAs assume total ownership of your environment and are responsible for its security, performance, and availability.
Supplemental outsourcing is ideal for companies experiencing hiring freezes, or that are simply looking for DBAs to augment their existing staff. With this option, you can choose which specific services you need our team to perform, allowing your in-house team to focus on more important, ROI building activities.
Either way, outsourcing part or all of your database administration support will save the time, money and energy your company needs to drive progress forward. To learn more, get in touch with us today using the QuickConnect button in the top right-hand corner of your screen.
The post Full vs. Supplemental Remote Database Administration Services [VIDEO] appeared first on Remote DBA Experts.
Many businesses misunderstand what Hadoop is and how it applies to their needs. Large enterprises collecting massive amounts of data often encounter database administration problems. In response, they typically hire outside experts who leverage Hadoop to help them better manage information aggregation.
Where did it come from?
With the plethora of computing applications available today, it's easy for business-minded people to dismiss Hadoop as just another hype-infused solution without ever learning what it actually does. Cloudera CEO Mike Olson cleared the air with Strata blog contributor James Turner, explaining that Hadoop grew out of Google engineers' efforts to catalog the textual and structural information the search engine was collecting and present it in a meaningful way.
"Google's innovations were incorporated into Nutch, an open source project, and Hadoop was later spun-off from that," Olson told Turner.
What it can do
Olson proceeded to explain that Hadoop's platform is capable of indexing large stores of data – both unstructured and structured – and running intricate analytics programs that are computationally intensive. For example, financiers can leverage Hadoop's structure to conduct accurate portfolio evaluations, carry out risk analysis and construct sophisticated models.
Database support services often find that utilizing Hadoop allows companies to perform incredibly complex collection and analysis tasks with far less computing power. Olson noted that the open source program was built to run on a large number of machines that don't share any memory or disks, enabling DBAs to purchase a number of inexpensive commodity servers, assemble them in a rack, and run Hadoop on each one.
"When you want to load all of you organization's data into Hadoop, what the software does is bust that data into pieces that it then spreads across your different servers," informed Olson. "There's no one place where you go to talk to all of your data; Hadoop keeps track of where the data resides."
Where it's headed
Use of the technology is skyrocketing. According to Cloud Times, a report published by Allied Market Research estimated that the worldwide market for Hadoop will grow at a compound annual growth rate of 58.2 percent between 2013 and 2020. The news source noted that burgeoning interest in the platform will be driven primarily by the increasing volume of unstructured data most organizations are trying to process.
DBA services well versed in the technology can be a major boon to companies looking to make use of Hadoop. Enterprises that choose to adopt it are sure to have a competitive advantage over those that do not.
The information collected by application developers, retailers and other organizations goes well beyond the basic, numeric form characteristic of an Excel spreadsheet. Documents containing written language hold a large amount of intelligence that can greatly assist businesses and public authorities looking to obtain insight into various situations. In order to harness this unstructured data, database experts have maintained that a switch to non-relational databases is imperative.
No easy task
CIO Magazine noted the problems Australian Crime Commission CIO Dr. Maria Milosavljevic has encountered when trying to scrutinize massive numbers of documents containing written language. Police reports, psychological evaluations and other documents are available for use, but assigning a team of employees to search for patterns among thousands of items seems illogical in the face of data analytics.
However, some of the more common programs aren't sophisticated enough to conduct such complex tasks. Milosavljevic noted that businesses can assign specific tasks to information scrutiny programs, but even these initiatives can become diluted by complications.
"My favorite example is a movie review that said a film was 'wonderfully horrid,'" noted Milosavljevic at the CeBIT conference in Sydney, Australia, as quoted by the source. "Wonderful is a positive word, horrid is a bad word. But 'wonderfully horrid', that's interesting. And a machine is not going to get all that right, but you all did because you laughed."
A constructive environment
Of course, this information comes from a number of distributed sources. Companies and individuals specializing in developing smartphone applications have experienced database administration woes because of the amount of text-based information that is exchanged between users. Maria Deutscher, a contributor to SiliconANGLE, noted that relational architectures – popularly known as conventional databases – were created with the intent of supporting transactions based on numeric intelligence.
Unstructured data expands workload density, making it difficult to transfer information between machines. Deutscher noted that Couchbase, a startup focused on open source database development, is currently pursuing a solution to this problem. The organization's mobile suite is capable of running locally, eliminating the need to transmit data to a remote server. One might expect local execution to reduce functionality, but the features application developers require on a daily basis remain intact.
Organizations looking to make sense of the massive number of written documents at their disposal should take a page from Couchbase's philosophy and look for flexible environments capable of supporting analysis endeavors.
The post Unstructured data marks shift to non-relational databases appeared first on Remote DBA Experts.
Trying to find the right database platform for your company can be an overwhelming task, but it’s still important to put some thought into choosing a specialized database platform to ensure the maximum benefit in the long run.
Faircom Vice President Randal Hoff said in a report on TechRadar.com that, first, you need to understand your business needs – the amount and type of data you'll be harnessing, storing and analyzing. Also consider whether you have data sources kept in legacy databases and applications, which often were developed with different coding, making the extraction process difficult.
It’s also very important to think about the future of the company. This alone could dictate whether you need one database platform over another – or the integration of both. Sometimes, companies may require more than one platform to handle different tasks, such as handling customer reviews and comments on one and processing purchase transactions on the other.
Whatever your database platform needs may be, our team of database administration experts can help you worry less and ensure that your database environments are highly available and high-performing 24×7.
The post How to Choose the Right Database Platform for Your Company [VIDEO] appeared first on Remote DBA Experts.
Experts have lauded in-memory databases (IMDBs) for their ability to improve data mining and analysis. Database administration professionals encounter IMDBs more often now than in years past, primarily because big data analytics has become so popular with businesses.
Analytics is a priority
In contrast to conventional disk storage mechanisms, IMDBs execute few central processing unit instructions, which drastically cuts down on the amount of time it takes enterprises to run information analysis applications. Mike Matchett, a contributor to TechTarget, noted that traditional transactional databases only held critical intelligence records in memory, while cheaper disks were used to store low-priority volume. However, modern thirst for web-based applications and real-time analytics has made IMDBs a more practical choice.
Today, database experts often find that the majority of structured databases need to be queried repetitively, which can be a slow, arduous process when each interaction requires a round trip to disk. About 20 years ago, this conundrum wasn't so ubiquitous, primarily because organizations weren't running complex analysis systems. To accommodate these programs, columnar machines were designed with optimized storage formats, but they still proved inadequate in the face of a high volume of transactions.
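A minimal way to see the in-memory idea in practice is SQLite, which can keep an entire database in RAM via its ":memory:" connection string, so repeated analytical queries never touch disk at all. The table and rows below are invented for the example; a production IMDB like HANA is of course a very different system, but the principle – data resides in process memory rather than on disk – is the same.

```python
import sqlite3

# The whole database lives in this process's memory; nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (portfolio TEXT, value REAL)")
conn.executemany("INSERT INTO positions VALUES (?, ?)",
                 [("alpha", 100.0), ("alpha", 250.0), ("beta", 75.0)])

# The kind of repetitive analytical query the article describes: every run
# is served straight from RAM, with no disk round trip.
total, = conn.execute(
    "SELECT SUM(value) FROM positions WHERE portfolio = ?", ("alpha",)
).fetchone()
print(total)
```

The trade-off, of course, is durability: an in-memory store vanishes with the process, which is why IMDB products pair memory-resident data with their own persistence mechanisms.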
Creating appropriate environments
In order to accommodate these complex analysis solutions, German developer SAP released HANA, a scale-out in-memory database built to hold essential enterprise resource planning data. The platform is capable of providing near-real-time business intelligence and can finish, in seconds, reports that might take hours to run on a legacy transactional database.
According to PC World, many database support services expect Oracle to unveil a new line of in-memory database products that take storage and processing beyond SAP's initial offerings. On June 10, CEO Larry Ellison will host a webcast titled "The Future of the Database Begins Soon" and discuss what the IMDB means for current and future Oracle customers.
"It will be possible to perform TRUE real-time, ad-hoc, analytic queries on your organization's business data as it exists at that moment and receive the results immediately," wrote Ellison, as quoted by PC World. "Imagine your sales team being able to know the total sales they have made as of right now – not last week, or even last night, but right now."
This level of operability is sure to be a boon for enterprises looking to monitor their progression. It gives executives the ability to make decisions based on what's occurring right now and to influence outcomes by acting on game-changing factors.
The post For analytics users, in-memory databases may prove useful appeared first on Remote DBA Experts.
We know the whole point of outsourcing your database administration is to save money without sacrificing the quality of your work, so at RDX we use our “Collective Knowledge Approach” – leveraging the thousands of years of collective experience our team has – to solve your problems faster and more efficiently.
When you outsource your DBA with RDX, you’re not paying for a single DBA resource – you’re buying 24/7 access and support from an entire team of onshore DBA experts – all for less than the cost of one in-house resource.
Your Primary DBA is your go-to contact, knowing your unique needs and support requirements. Now, whenever you have a complex task, your DBA calls on the Subject Matter Expert with the highest skillset to specifically address your need. And behind the scenes, our PMRC team is constantly monitoring your environments to help prevent database problems before they happen.
When you’re outsourcing your DBA support, you want to make sure you get the highest quality support possible. And at RDX, our team has got you covered 24/7.
Database administrators, by the very essence of their job descriptions, are the protectors of their organization’s core data assets. They are tasked with ensuring that key data stores are safeguarded against any type of unauthorized data access. Ensuring that data is protected on a 24 x 7 basis is a complex task. External intrusions and internal employee data thefts combine to make many IT professionals lie awake at night thinking about how they can secure their sensitive database data stores.
Data breaches threaten the survivability of any organization. The financial impact of the breach is not the only issue that affects companies that are victims of unauthorized data access. Bad press, fines, legal costs and loss of customer goodwill must also be factored into the breach’s total impact on the organization.
Significant data breach announcements are publicized on a daily basis. External hackers and rogue employees continuously search for new ways to steal sensitive information. There is one component that is common to most thefts – the ultimate target of the breach is the database data store containing sensitive information.
Protecting Critical Data Assets
Hackers tend to be intelligent, inquisitive, patient, thorough, driven and, often, successful. This combination of traits makes data protection a formidable challenge. Those responsible for data protection have a wealth of hardware and software offerings available to them that assist in the monitoring and protection of their organization’s sensitive data stores. Data security teams are tasked with creating protection strategies based on a combination of procedures, controls and product offerings.
These product offerings range the spectrum, from Anti-Virus and Security Information and Event Monitoring software to state-of-the-art firewalls and perimeter monitoring hardware components. The seemingly endless array of hardware and software protection offerings available allows security units to interweave the products and procedures together to craft protection strategies that are custom tailored to their organization’s security needs.
Database Activity Monitoring
The ultimate target for many breach attempts is the database. It is widely known to be the container for the organization’s most sensitive data assets. In the past, database administrators have combined product vendor supplied security mechanisms with traditional security products to protect their database environments. Although the major database vendors continue to integrate strong security features into their products’ architectures, third-party offerings in this space have been somewhat limited.
A new breed of products loosely grouped together and defined as “Database Activity Monitoring” (DAM) has recently begun to gain traction with the database administration community. Database Activity Monitoring allows organizations to gain visibility into all database activity including local privileged access and sophisticated attacks from within the database itself. Monitoring helps administrators protect their most valuable and sensitive data from external threats and malicious insiders by alerting them to attacks as well as terminating sessions that violate predefined security policies.
Database Activity Monitoring focuses on the evaluation of the SQL statements accessing the data from a security perspective. Much like a parsing engine that checks the syntax of a SQL statement to ensure it is syntactically correct, the software analyzes the statement and compares it to a predefined set of security rules.
Depending on the vendor utilized to provide Database Activity Monitoring, the combination of rules can be as complex or as simple as needed to protect the database. Most offerings allow administrators to combine security rules that consist of:
- Database account executing statement
- Application program sending statement to database for execution
- Host name, IP address of computer accessing the database
- Time of day
- Type of access being performed (SELECT, UPDATE, INSERT, DELETE)
- Data Definition Language statements (CREATE, DROP, ALTER)
- Data Control Language statements (GRANT, REVOKE)
The software then allows the administrator to take preventative action for statements that violate the predefined security policies. Depending on the product used, preventative action could include logging the statement, generating alerts or blocking the statement from execution. Some vendor offerings also allow the administrators to quarantine the user, which prevents them from issuing any additional statements that access the database.
The major offerings also provide out-of-the-box compliance and vulnerability reports. The Database Activity Monitoring software has preprogrammed rules for most major regulatory requirements including SOX, HIPAA and PCI. Vulnerability scans allow administrators to quickly identify potential issues and take the steps necessary to remediate or mitigate their impact.
The general vulnerability scans test for weak passwords, default accounts and other breaches of best practices that may result in a vulnerable database. Scans can be scheduled to run on a regular basis so that any new vulnerabilities are identified promptly.
Remote DBA Experts’ (RDX) Database Activity Monitoring Service
As a remote database services provider, RDX understands the important role security plays in our customers’ environments. Our customers have entrusted our organization to help safeguard their critical data stores.
RDX completed an extensive evaluation of various database protection products and chose to partner with McAfee Security to provide a database security monitoring offering to our customers. A thorough vendor evaluation is critical for any shop desiring to implement Database Activity Monitoring. RDX performed a traditional vendor analysis that assigned weights to evaluation criteria based on their importance to our organization. Because RDX supports a wide range of database products, the product was required to support as many different vendor offerings as possible. Some of the additional criteria RDX used in its evaluation follow:
- Track record as a proven security software provider and presence in market place
- Wide range of SQL statement security monitoring options (type of access, user, time-of-day, etc.)
- Limited performance impact on the database, transactions and individual SQL statements being analyzed
- Ability to quarantine users, not just block their statement from executing
- Strong vulnerability scanner option that included predefined rules for SOX, HIPAA and PCI
- Robust reporting and alerting features
- Internal security mechanisms that prohibit product tampering and unauthorized access to the Database Activity Monitoring data store
- The ability to virtually patch systems to quickly protect databases from new exploits until vendor-supplied hard patches become available
- Ease-of-installation and debugging
After the tool is purchased, an implementation project is then initiated to tailor the product to meet the organization’s Database Activity Monitoring needs. The implementation can be decomposed into the following tasks: educating the application owners and security teams on the features that the Database Activity Monitoring product provides, needs analysis, customizing the product, defining alert notification and escalation procedures, initial vulnerability scans and ongoing monitoring. RDX’s implementation consists of the following phases:
Customizing the Security Offering
RDX works with each customer to design a security service offering that meets their unique security monitoring needs:
- Detailed discussions are held during integration on the type of data to be protected as well as the customer’s current internal security practices and product sets. The intent is to allow RDX to fully understand each customer’s overall security requirements and current controls.
- RDX’s security services are discussed in-depth with each customer to ensure they fully understand RDX’s data protection strategies.
- RDX works with customers to agree upon the installation and configuration of the security monitoring architecture.
- RDX identifies critical customer software products being utilized to create weekly vulnerability and new threat notifications.
- Criteria used to generate alert event notifications are agreed upon.
- Alert event notification and escalation procedures are finalized.
- Initial and ongoing vulnerability scans are scheduled.
The initial vulnerability scan provides a detailed security analysis of the database being protected. Its intent is to identify current vulnerabilities to allow RDX and the customer to quickly address them.
- The scanning process automatically finds databases on the network, determines if the latest patches have been applied, and tests for weak passwords, default accounts, and other common threats — making it easier to demonstrate compliance to auditors and improve protection of critical data assets. The scanner conducts a check of more than 3,500 potential database vulnerabilities.
- Scanning templates for PCI DSS, SOX, HIPAA and other regulations can be utilized, as well as specific reports for various stakeholders such as DBAs, developers and security teams.
- The scanning output provides remediation recommendations for most high-priority vulnerabilities.
- RDX’s database and operating systems experts will present the scanning results to customers and provide valuable insight into the security flaw, the type of exposure or vulnerability it presents. RDX works with each customer to mitigate or completely resolve the vulnerability issue.
- RDX will then schedule the scanner to run on a quarterly basis so that any new vulnerabilities are identified promptly.
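Two of the checks such a scan performs – flagging default accounts and weak passwords – can be sketched as follows. This is a toy illustration: the account names, the weak-password list and the inventory format are invented, and a real scanner such as McAfee's tests logins against known defaults rather than reading plaintext credentials. The sketch only shows the shape of the rule checks.

```python
# Invented reference lists for the example.
DEFAULT_ACCOUNTS = {"sa", "scott", "system", "admin"}
WEAK_PASSWORDS = {"password", "admin", "changeme", "tiger", ""}


def scan(accounts: dict[str, str]) -> list[str]:
    """Return human-readable findings for a {username: password} inventory."""
    findings = []
    for user, pwd in accounts.items():
        if user.lower() in DEFAULT_ACCOUNTS:
            findings.append(f"default account present: {user}")
        if pwd.lower() in WEAK_PASSWORDS or len(pwd) < 8:
            findings.append(f"weak password on account: {user}")
    return findings


report = scan({"scott": "tiger", "app_svc": "c0rrec7-h0rse-b4ttery"})
for finding in report:
    print(finding)
```

The well-known Oracle demo account scott/tiger trips both checks, while the service account with a long, non-default password passes – the same pass/fail logic a production scanner applies across thousands of rules.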
24×7 activity monitoring is the heart of any Database Activity Monitoring implementation:
- A small client-side monitoring sensor is installed on each monitored target.
- The sensors send alert notifications to a central console installed at the customer site.
- RDX works with each customer to create security event identification criteria to alert or terminate sessions based on, but not limited to:
- Application program
- IP or host name
- Time of day
- Type of statement (SELECT, INSERT, DELETE, TRUNCATE, UPDATE, etc.)
- The alert notifications are forwarded to RDX’s Proactive Monitoring and Response Center (PMRC) for processing.
- RDX’s PMRC Team customizes the alert event notification and escalation procedures to meet the customer’s security needs.
- RDX will configure self-defense mechanisms to detect product tampering and trigger tampering alerts.
- The PMRC team constantly reviews alerts generated during initial implementation to reduce alerting “noise”. Discussions are held with the customer to reduce unwarranted notifications.
- RDX ensures that all monitoring components are functioning as expected and performs system upgrades to ensure that all new features are quickly leveraged by each customer.
The intent is to provide customers with an ongoing service offering that focuses on the protection of sensitive database data stores:
- Security Analysis Services
- Configure custom Database Activity Monitoring Reports
- Work with customer to add or change database specific alert event notification criteria
- New Threat Analysis – Identify customer IT product sets during integration and create new vulnerability notifications that may affect those products
- DBAs and OS engineers will provide detailed insight to alert notifications when requested
- Scan for new databases added to network
- Perform quarterly vulnerability scans to ensure that recent changes have not increased exposure to security threats
- Database and OS patching services
- Upgrade database and operating system to new release
- Apply database and operating system security patches
- Provide advice to customer personnel on patching and upgrades
Protecting against unauthorized data access must be an ongoing process. RDX is acutely aware that those individuals desiring to exploit data stored in computer systems for personal gain are constantly changing and improving their data access strategies. The fluid nature of external and internal attacks requires DBA units not only to leverage time-tested database security best practices but also to actively seek out and implement hardware and software products that can be interwoven into an effective database protection strategy. One of those key tools in your DBA toolbox is the database activity monitor. If you have sensitive data, you can choose to implement the product on your own or take advantage of our services.
Regulations dictated by the Affordable Care Act and the Center for Medicare and Medicaid Services' (CMS) Meaningful Use laws will likely require health care organizations to seek database support services. Stage 2 of the latter legislation requires hospitals, clinics, insurance companies and other entities operating in the medical industry to adopt electronic health records (EHRs), so that it's easier for professionals to share patient information – the hope being better, cheaper care for individuals.
Abiding by mandates
This massive transition toward EHRs has prompted health care companies to take two factors into consideration. First, in order to remain compliant with the Health Insurance Portability and Accountability Act (HIPAA), robust security measures must be applied to data storage, servers and facility networks. Second, professionals must find a way to manage a system in which a doctor can access, create and edit information from multiple devices. For these reasons, many have regarded IT outsourcing as a viable option, turning to database administration professionals to oversee system management.
Handling such operations in-house often proves challenging for companies that have focused more on patient care than on developing computing capabilities, but it's what the federal government mandates. According to iHealthBeat, the CMS recently published a notice in the Federal Register that it plans to assess the readiness of organizations looking to adhere to meaningful use laws. A database containing reports on each entity obligated to abide by the CMS' mandates will be launched after reviews are completed.
Diving into big data analytics
The CMS' laws provide the health care industry with a unique opportunity to capitalize on the benefits associated with information analysis. InformationWeek noted that metrics can help organizations scrutinize costs, reach faster diagnoses, adjust managerial practices, improve workflow and much more. To get the most out of such technology, data could be collected from EHRs, wearable devices such as FitBits and numerous other sources.
"We, as a society, need to start creating our own metrics for how health care quality is defined," Dr. Anil Jain, senior vice president and chief medical officer at Explorys told InformationWeek in an interview.
Ten years ago, integrating analytics into hospital practices was largely impractical. However, the advent of cloud computing, outsourced DBA services and greater demand for the technology have driven down costs. It's time that the health care industry benefits from it.
The post Health care regulations may boost need for DBA services appeared first on Remote DBA Experts.
Virtual-Strategy.com reports that Boston Medical Center recently upgraded its database environment, moving it onto Oracle Database Appliance to improve efficiency and performance. The academic medical center affiliated with Boston University aims to fully support, scale and optimize its clinical and operational data environment, Virtual Strategy reports.
By embracing a more modern engineering solution, BMC will be able to save both time and money, reducing annual storage costs and management time with multiple vendors. Oracle Database Appliance has already provided the medical center with greater CPU and RAM usage and has allowed BMC to consolidate its data management team – meaning engineers can focus on other high-priority projects.
Solutions like Oracle Database Appliance can help hospitals modernize their databases to achieve faster, more insightful analytics on the quality of their treatments and care.
The post Boston Medical Center uses Oracle for higher performance [VIDEO] appeared first on Remote DBA Experts.
There has been tons of talk since the White House Big Data Report came out last week on the implications of big data in the healthcare and education industries, personal security and other important privacy issues.
Among these, de-identification and perfect personalization stand out as things to watch for, as technology companies become increasingly capable of linking device users with personal information. The question at hand here is how people will be able to control their online identities. Even if organizations using privacy protection technology de-link “anonymous” data from a specific person or device, advancing technologies are making it easy for this information to be re-linked.
Perfect personalization, which allows marketers to deliver exactly the right message to customers, could also potentially leave room for discrimination based on ethnic names or cultural preferences.
The important thing is to be aware, on both a personal and consumer level, of the power and reach big data continues to amass.
The post White House Big Data Report Raises Privacy Questions [VIDEO] appeared first on Remote DBA Experts.
In response to the burgeoning cloud market, Oracle has rolled out a number of applications designed to help companies streamline migration. A number of businesses interested in capitalizing on the benefits associated with the cloud are turning to database administration services to assist them. Yet, there are a few preliminary steps these enterprises should take before submitting to cloud fever.
Get your licensing in order
SiliconANGLE recently attended May's EMC World 2014 event at the Sands Convention Center in Las Vegas, where Principal Engineer for EMC Bart Sjerps discussed why and how Oracle customers are virtualizing their databases. Many are choosing to run these critical production servers in virtualized environments such as VMware, which helps resolve issues around support and certification. Deployment strategies aside, he claimed that companies should consider the legalities of the situation before proceeding.
"Oracle is very keen on licensing," he told EMC World attendees, as quoted by the source. "Make sure that your license are in place, that you're compliant with your Oracle policy … if you're not in compliance, then you might have some severe issues later."
To get a better grasp of what these licensing policies entail, it may be worthwhile for businesses to hire Oracle experts to provide special insight.
One of the key reasons corporations consider shifting to a cloud solution in the first place is the promise of greater operability. Robin Birtstone, a contributor to The Register, recommended establishing robust baselines for existing performance models and figuring out why operations are not where decision-makers would like them to be. It's important to determine what kind of processes a company would like to benefit from once the transition to the cloud is complete.
Analyze the applications
The next step is figuring out how all of the applications, query loads and central processing units will interact with the new environment. Considering the latter element, Sjerps stated that database administration professionals often find that traditional, physically deployed servers possess very low CPU utilization. Keeping these machines running and managing them in a tangible format can be quite expensive when costs are factored into the equation.
"One of the things you can do with virtualization is basically glue all those CPU resources together," said Sjerps.
Sjerps noted that such a technique will enable remote DBA personnel to move workloads and set CPU shares so that companies can drive utilization. If the aforementioned considerations are taken into account, a business will be ready for cloud migration.
Enterprises choosing to remain with legacy data centers won't be able to take advantage of the Internet of Things. Database administration professionals would agree that the large amount of information being generated by Web-connected devices, ranging from smartphones to automobiles, cannot be contained or properly analyzed in a traditional system.
According to CIO, experts at EMC have stated that cloud computing offers solace to those aggregating large troves of intelligence. The scalable storage volume offered by the technology allows IT departments to collect diverse data sets and run analysis programs to translate the ones and zeroes into actionable information.
EMC executives assembled at the company's world conference in Las Vegas earlier in May, claiming that the "2nd platform," which consists of client-server technology, will be superseded by a "3rd platform" of computing spearheaded by big data, cloud deployments, mobile devices and social networking. EMC has combined its storage business with VMware, RSA Security and its own Pivotal division to help businesses make the transition.
EMC President Jeremy Burton referred to this cohesion as a "federation," claiming that the face of IT is changing rapidly to adapt to new data collection techniques. What it foreshadows is an environment of outsourcing, in which businesses hire entities providing database support services and other forms of IT backing.
Can they defend the hub?
It seems that whenever cloud computing comes up in conversation, questions about protection soon follow. Though any IT department is bound to be apprehensive about transitioning to an environment that bypasses traditional platforms, it should be noted that how secure a cloud server is depends largely on the priorities of the company using it.
ITProPortal referenced a survey of 250 enterprises conducted by SkyHigh, which found that 3,571 different kinds of cloud services were being used among the study's participants. Of those products, a mere 16 percent possessed multi-factor authentication and 11 percent had data encryption features. Nearly 30 percent of respondents encountered hacking attempts from outsiders.
From what can be gathered from this study, it's not cloud computing that poses a threat, but the priorities set by the businesses leveraging it. Though database administrator services can provide enterprises with adequate protection, benefiting from their support depends on whether or not companies are willing to outsource to them. In addition, concern regarding cloud defense has spawned a new branch of IT that specializes in cloud security techniques – another option enterprises could capitalize on.
I support a system that uses third-party software. After a recent application upgrade, I began receiving sporadic 8623 errors: at first just once every few days, but quickly escalating to 3-4 per day. The error I was receiving was 8623, Severity 16, State 1.
DESCRIPTION: The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.
I understood that this was a complex query but struggled with finding the source. I was using SQL 2008 R2 so the extended events available in SQL 2012 were not an option for debugging. (If you are on this version, I recommend this article by Jason Brimhall: http://jasonbrimhall.info/2014/01/02/day-9-queries-going-boom/)
For me, the only option was to try to capture the error in a trace. The error was occurring multiple times per day, but not at consistent times. I set up a server-side trace for SQL and waited for the error to occur.
I won’t go into a lot of detail on server-side traces as there are lots of examples of this on various websites.
I created my profiler trace and exported the definition. In SSMS, I made minor changes to the code before starting the trace. I added a filecount parameter (50) and a stoptime parameter (8 hours in the future). This would give me 50 rolling trace files to review and would keep the trace from consuming excessive space on my server.
I then changed the sp_trace_create to include these new parameters.
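The modified creation call might look something like this sketch (the output path, file size, and stop time are placeholders, not the values from my environment):

```sql
-- Trace creation with rollover enabled: 50 rolling files, auto-stop in 8 hours
declare @TraceID int
declare @maxfilesize bigint = 50                        -- MB per rollover file (placeholder)
declare @stoptime datetime = DATEADD(hour, 8, GETDATE())

exec sp_trace_create
    @TraceID output,
    2,                               -- option 2 = TRACE_FILE_ROLLOVER
    N'E:\Traces\error_8623',         -- hypothetical path; SQL Server appends .trc
    @maxfilesize,
    @stoptime,
    50                               -- @filecount: keep 50 rolling files
```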
I executed the code to start the trace and waited…and waited…and waited. Once the error occurred, I stopped the trace so I wouldn't lose the data I had collected.
I first found my TraceID:
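A sketch of that lookup, matching on the trace file path (the path is a placeholder):

```sql
-- Find the server-side trace by the file path it writes to
select id, path, status
from sys.traces
where path like N'%error_8623%'
```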
and stopped the trace:
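Stopping it is a single call to sp_trace_setstatus with status 0 (the trace id below is hypothetical; use the id returned by the previous query):

```sql
-- Status 0 stops the trace but leaves its definition on the server
exec sp_trace_setstatus @traceid = 2, @status = 0
```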
I then queried the trace files looking for the error:
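Something along these lines, reading the files back through fn_trace_gettable (the path is a placeholder; passing `default` for the second argument reads all rollover files in the set):

```sql
select StartTime, SPID, Error, Severity, TextData, HostName, LoginName, DatabaseName
from sys.fn_trace_gettable(N'E:\Traces\error_8623.trc', default)
where Error = 8623
```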
This showed me the error but not much more information (hostname, loginname, database name were removed for security purposes). I looked at the surrounding rows, before and after, and was still not able to find the issue. Since I had the SPID though, I was able to search through all the trace files for that SPID:
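The SPID search was essentially the same query without the error filter, roughly as follows (the SPID value is illustrative; Duration is reported in microseconds on this version):

```sql
-- Everything that session did, in order, across all the trace files
select StartTime, EventClass, Duration/1000 as DurationMs, TextData
from sys.fn_trace_gettable(N'E:\Traces\error_8623.trc', default)
where SPID = 123
order by StartTime
```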
I found the row shown above and pulled the TextData. This query ran for almost 2 minutes and ended just after the 8623 error. In this query, I was able to see the issue quickly in the WHERE clause.
At this point, I had found the source of the error and had code to send to the vendor.
For my final cleanup, I closed the trace and manually deleted the .trc files:
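Closing the trace is status 2, which also removes its definition from the server; the .trc files themselves still have to be deleted by hand:

```sql
exec sp_trace_setstatus @traceid = 2, @status = 2
```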
SQL doesn’t provide an easy way to identify these rogue queries, but the data is there if we know how to extract it.
Thank you for reading. Feel free to leave a comment below, and stay tuned for my next post in the coming weeks.
The term ‘big data’, which describes the massive collection of data sets from traditional and non-traditional sources used for market discovery, analysis, trending and forecasting, seems to be the buzzword these days.
Consumers unknowingly contribute to 'big data' when they shop on their iPhones or buy a plane ticket online. Businesses track these transactions and interactions and use this data for analysis and mining to learn more about their audiences and to create customer-centric strategies involving spending/sales trends, interests, popularity and rewards programs, to name a few. Even the UN has urged the use of big data in pushing forward global development goals.
For businesses, big data should be maximized and used to its full potential. But this requires more than just great analytics – it requires strategy, security, and platform management.
A recent article on Forbes.com recommends that businesses create a corporate culture that understands, inside and out, the big data their company is obtaining; ensure security is embedded into all daily processes; and invest in a platform that will allow them to use their big data to make better business decisions and to compete in their business space.
CIOs throughout the United States often outsource to remote database support services to provide their companies with mobile device management systems. However, getting employees to adhere to security protocols is a challenge in and of itself. This difficulty is causing executives to wonder whether or not they should reward their workers for participating or reprimand them for failing to do so.
Popular, but risky
The Oklahoman reported that even computers disconnected from the Internet are at risk of contracting a virus. Davis Merrey, owner and chief executive of TeamLogic IT, informed the source that the church he regularly attends in Oklahoma City had to rebuild its PC after someone uploaded corrupted files from a mobile device onto the machine.
Although the idea of accessing work-related documents from anywhere at any time is appealing to both business leaders and their subordinates, adhering to security policies and regulating access remains a challenge. As a result, database administration experts have recommended that enterprises distribute their own smartphones or tablets – which are typically regulated by DBA professionals – to employees. However, such a process can be quite expensive for businesses with a limited budget.
Some organizations have simply chosen to ban bring-your-own-device practices altogether. An anonymous mechanic working at Tinker Air Force Base claimed that the government's concern regarding cyber warfare has caused officers to prohibit airmen and employees from plugging electronic devices into the base's computers.
To tempt or punish?
Tom Kaneshige, a contributor to CIO, noted that companies still want to benefit from the cost savings and enhanced operability associated with BYOD. In order to do so, executives are realizing that they need to make those working beneath them care more about database security. Kaneshige referenced a survey of 500 employees conducted by Centrify, which showed that 43 percent have accessed sensitive corporate information while connected to an unsecured public network.
Due to this apparent apathy, some CIOs are outsourcing to database experts who can erase unauthorized applications and data held on smartphones or tablets or prevent them from obtaining enterprise data via certain connections. However, this process contradicts the reasons why companies allow their employees to participate in BYOD in the first place.
Josh Bouk, vice president of sales and marketing at Cass, told Kaneshige that his company offers its workers a monthly stipend in exchange for enrolling in Cass's BYOD policies. Employees undergo eligibility screening and then are granted permission to access the organization's databases.
A couple of weeks ago, my colleague, Brandi Dollar, wrote a blog post about SQL Server transaction log basics. Her post is a great lead-in to a script that I wrote to solve a common problem: high VLF counts. An important piece of managing your database transaction logs is keeping the number of virtual partitions within the log file, the Virtual Log Files (VLFs for short), low. A high VLF count is typically the result of running with the default auto-grow settings. As the transaction log continues to grow in sub-optimal increments, the fragmentation becomes worse and worse, and a high VLF count can contribute to several performance problems.
Fixing a database with a high VLF count is a simple process:
- Check the current size of the transaction log.
- Backup the transaction log.
- Shrink the transaction log to as close to 0 KB as possible.
- Check that the VLF count is less than 50 (best if less than 10 at this point).
a. If the VLF count is greater than 50, repeat steps 1 through 3.
b. You may also have to check that there are no active transactions, etc. The log_reuse_wait_desc column in sys.databases will help identify why the log file cannot be shrunk.
- Grow the transaction log back out to the original size.
When I log into a new server environment for a client, it is pretty common to see multiple databases with high VLF counts. The steps above are easy enough to complete when you have one or two databases that need to be fixed, but it becomes a very tedious process when there are 20, 30, 40 databases that all need attention.
To save a lot of the leg work involved, I wrote a script to generate all of the necessary commands. When you run the script below, it will print out all of the commands necessary to reduce the VLF count. Take the output from the messages tab, and you can run it in phases in a new query window. I probably could have taken the script a step further to completely automate the process; however, I still like to have control over the process and to monitor its completion. Also, rather than writing my own transaction log backups, I will typically run an existing maintenance plan job as needed.
The output of the scripts below will come out similar to:
/****************************************
* Fix High VLF for database: MattWorks
* Starting VLF count: 219
****************************************/
-- Step 1: Get current log file size
use [MattWorks]
SELECT name AS [File Name],
       physical_name AS [Physical Name],
       size/128.0 AS [Total Size in MB],
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB],
       type_desc
FROM sys.database_files;

-- Step 2: Take transaction log backup if needed, and then shrink the log file
DBCC SHRINKFILE (N'MattWorks_log' , 0, TRUNCATEONLY);

-- Step 3: Check if current VLF count is less than 50.
-- If not, take another log backup and repeat step #2 until VLF count is less than 50. Check for active transactions, etc.
DBCC LogInfo;

-- Step 4: Grow log file back to original size
USE [master];
GO
ALTER DATABASE [MattWorks] MODIFY FILE (NAME = N'MattWorks_log', SIZE = 1024MB);
GO
I hope that this script can be of use to someone.
/**************************************
* Script to generate commands to reduce transaction logs
* with high VLF counts.
* written by Matt Nelson, RDX
***************************************/
SET NOCOUNT ON

--variables to hold each iteration
declare @query varchar(100)
declare @dbname sysname
declare @vlfs int

--table variable used to loop over databases
declare @databases table (dbname sysname)
insert into @databases
--only choose online databases
select name from sys.databases where state = 0

--table variable to hold results
declare @vlfcounts table (dbname sysname, vlfcount int)

--table variable to capture DBCC loginfo output
declare @dbccloginfo table
(
    --resourceid int, --need to add this column for SQL 2012
    fileid tinyint,
    file_size bigint,
    start_offset bigint,
    fseqno int,
    [status] tinyint,
    parity tinyint,
    create_lsn numeric(25,0)
)

while exists(select top 1 dbname from @databases)
begin
    set @dbname = (select top 1 dbname from @databases)
    set @query = 'dbcc loginfo (' + '''' + @dbname + ''') WITH NO_INFOMSGS'

    insert into @dbccloginfo
    exec (@query)

    set @vlfs = @@rowcount

    insert @vlfcounts
    values (@dbname, @vlfs)

    delete from @databases where dbname = @dbname
end

/* Build data file info */
CREATE TABLE #DatabaseFiles
(
    [database_name] [sysname] NOT NULL,
    [file_id] [int] NOT NULL,
    [file_guid] [uniqueidentifier] NULL,
    [type] [tinyint] NOT NULL,
    [type_desc] [nvarchar](60) NULL,
    [data_space_id] [int] NOT NULL,
    [name] [sysname] NOT NULL,
    [physical_name] [nvarchar](260) NOT NULL,
    [state] [tinyint] NULL,
    [state_desc] [nvarchar](60) NULL,
    [size] [int] NOT NULL,
    [max_size] [int] NOT NULL,
    [growth] [int] NOT NULL,
    [is_media_read_only] [bit] NOT NULL,
    [is_read_only] [bit] NOT NULL,
    [is_sparse] [bit] NOT NULL,
    [is_percent_growth] [bit] NOT NULL,
    [is_name_reserved] [bit] NOT NULL,
    [create_lsn] [numeric](25, 0) NULL,
    [drop_lsn] [numeric](25, 0) NULL,
    [read_only_lsn] [numeric](25, 0) NULL,
    [read_write_lsn] [numeric](25, 0) NULL,
    [differential_base_lsn] [numeric](25, 0) NULL,
    [differential_base_guid] [uniqueidentifier] NULL,
    [differential_base_time] [datetime] NULL,
    [redo_start_lsn] [numeric](25, 0) NULL,
    [redo_start_fork_guid] [uniqueidentifier] NULL,
    [redo_target_lsn] [numeric](25, 0) NULL,
    [redo_target_fork_guid] [uniqueidentifier] NULL,
    [backup_lsn] [numeric](25, 0) NULL
)

EXEC dbo.sp_MSforeachdb 'INSERT INTO #DatabaseFiles SELECT ''[?]'' AS database_name, * FROM [?].sys.database_files'

print ''
print '/*********************************************************************************************'
print 'Copy results below this line to new query window to execute as needed.'
print '*********************************************************************************************/'

DECLARE @databasename varchar(max)
DECLARE @vlfcount varchar(max)
DECLARE @filename varchar(max)
DECLARE @physicalname varchar(max)
DECLARE @totalsize varchar(max)
DECLARE @availablespace varchar(max)
DECLARE @typedesc varchar(max)

DECLARE dbcrsr CURSOR FOR
    SELECT dbname, vlfcount
    FROM @vlfcounts
    WHERE vlfcount > 50 --comment this line to print statements for all databases
      AND dbname not in ('tempdb','master','msdb','model','reportservertempdb')
    ORDER BY dbname
OPEN dbcrsr
FETCH NEXT FROM dbcrsr INTO @databasename, @vlfcount
WHILE @@fetch_status = 0
BEGIN
    DECLARE filecrsr CURSOR FOR
        SELECT name, size/128
        FROM #DatabaseFiles
        WHERE database_name = '[' + @databasename + ']' and type_desc = 'LOG'
    OPEN filecrsr
    FETCH NEXT FROM filecrsr INTO @physicalname, @totalsize
    --Generate print statements to execute in new window
    WHILE @@fetch_status = 0
    BEGIN
        if @totalsize = 0 SET @totalsize = 1
        print char(10) + '/****************************************'
        print '* Fix High VLF for database: ' + @databasename
        print '* Starting VLF count: ' + @vlfcount
        print '****************************************/'
        print '-- Step 1: Get current log file size'
        print 'use [' + @databasename + ']'
        print 'SELECT name AS [File Name] , physical_name AS [Physical Name], size/128.0 AS [Total Size in MB],size/128.0 - CAST(FILEPROPERTY(name, ''SpaceUsed'') AS int)/128.0 AS [Available Space In MB], type_desc '
        print 'FROM sys.database_files;'
        print char(10) + '-- Step 2: Take transaction log backup if needed, and then shrink the log file' + char(10) + 'DBCC SHRINKFILE (N''' + @physicalname + ''' , 0, TRUNCATEONLY);'
        print char(10) + '-- Step 3: Check if current VLF count is less than 50.'
        print '-- If not, take another log backup and repeat step #2 until VLF count is less than 50. Check for active transactions, etc.'
        print 'DBCC LogInfo;'
        print char(10) + '-- Step 4: Grow log file back to original size'
        print 'USE [master];' + char(10) + 'GO'
        print 'ALTER DATABASE [' + @databasename + '] MODIFY FILE (NAME = N''' + @physicalname + ''', SIZE = ' + @totalsize + 'MB);' + char(10) + 'GO'
        print char(10) + char(10) + char(10)
        FETCH NEXT FROM filecrsr INTO @physicalname, @totalsize
    END
    CLOSE filecrsr
    DEALLOCATE filecrsr
    FETCH NEXT FROM dbcrsr INTO @databasename, @vlfcount
END
CLOSE dbcrsr
DEALLOCATE dbcrsr

DROP TABLE #DatabaseFiles
Thank you for reading! Stay tuned for my next post in the coming weeks.
Though it generally falls under the umbrella of the IT industry, cloud computing could be considered to be in a league of its own. Database experts are beginning to favor the scalable, flexible deployment option, as it allows them to efficiently accumulate and manage digital intelligence. IT companies are recognizing the popularity of this trend, and in turn helping their clients prepare for or execute cloud migration endeavors.
According to Chris Kanaracus, a contributor to Good Gear Guide, Oracle recently announced that it will allow its customers to transition their subscription plans for on-premise software programs to the company's Software-as-a-Service applications. The migration plan, appropriately named Customer 2 Cloud, enables consumers currently using Oracle's in-house human capital management and customer relationship management tools to switch over to cloud versions.
Many Oracle experts have stated that the business move is likely in response to increased pressure from competitors such as Workday and Salesforce.com. While the latter organization provides its customers with cloud-based CRM systems, the former constructs HCM and enterprise resource planning solutions that operate through SaaS models. Ray Wang, founder and chairman of Constellation Research, noted that many enterprises desire such cloud applications, but are often unsure of how to make the transition.
"Many customers need a defined methodology to go from on-premises to cloud," said Wang, as quoted by Kanaracus.
Oracle: a cloud game changer?
Oracle's Customer 2 Cloud program isn't the only cloud-focused solution the company has recently released. In April, the corporation also released Solaris 11.2, a Unix-based operating system that provides businesses with a viable cloud management suite. ZDNet contributor Larry Dignan acknowledged Oracle President Mark Hurd's ambitious goal to spearhead new solutions and applications that will solidify the company's place as the leader of everything cloud. While such an endeavor may come off as a tall order, database administration service organizations have taken note of the latest Oracle projects.
For example, Dignan wrote that Oracle's Marketing Cloud can break down data channels and correlate them with IT and marketing features that can help businesses appeal to customers who exercise some kind of sway over other consumers. The program includes Oracle BlueKai, Oracle Content Marketing, Oracle Eloqua, Oracle Responsys and Oracle Social Cloud, ensuring that an integrated, concise solution will be at the disposal of CMOs throughout the world.
From what can be gathered of these recent initiatives, it's apparent that Oracle intends to assert itself as a serious competitor in the cloud application sector.