With concern for IT security come assumptions about on-premises databases, cloud computing and the "Big Brother" syndrome that has gripped United States enterprises and citizens. Because data is collected from an incredibly wide range of sources, constituents are concerned about whether their information is exempt - especially on a cloud platform.
According to Standard-Examiner, Barry Conover, executive director of Utah's Tax Commission, claims that the agency has taken appropriate measures, such as data encryption, to ensure that its database is well protected. Conover also noted that Target's credit card fiasco has been followed by six new federal mandates and 54 new clarifications. One regulation requires any state department that uses mobile devices to own them.
Protection from the industry
At a conference in San Francisco, Larry Ellison, CEO of IT developer Oracle, allayed attendees' concerns about security breaches, stating that the company has not sustained a database infiltration in nearly 20 years. The apprehension was sparked by allegations that the National Security Agency has been monitoring Oracle's cloud customers.
"One unique aspect of Oracle's database is that administrators can't look at the data inside," Ellison informed the audience. The CEO cited a multi-platform database support system as one of the elements protecting customer information.
According to CRN, Oracle's partners have claimed that its security is top notch. Rhos Dyke, executive vice president of California-based vendor Cloud Creek Systems, stated that his clients lauded Oracle's ability to collect and manage massive amounts of data originating from variable sources, while still maintaining confidentiality.
Solutions from the executive
To ease apprehension toward the federal government, President Obama, along with White House administrators, released on February 12 a list of voluntary guidelines for companies looking to improve database security.
According to CIO Magazine, the framework, which was initially developed by the U.S. National Institute of Standards and Technology, provides a general agreement on what a quality cybersecurity program looks like. In general, IT security professionals have submitted positive reviews of the guidelines.
Jeff Greene, senior policy counsel at a security software vendor, said that the framework is flexible and adaptable, and that both small and large-scale businesses can benefit from it. Another industry professional, Tom Kellerman, said that although the framework isn't as complex as he would like it to be, it establishes a solid foundation. He also noted that health care and financial services firms will be among the first businesses to adopt the executive-endorsed guidelines.
Although cloud migration appears to be the biggest trend in business optimization, in-house databases are adapting to the shifting technological climate. The increased complexity of these intricate infrastructures means that enterprises are looking for database administration professionals to manage their systems off-premise. This outsourcing strategy provides corporations with the ability to consolidate their resources towards delivering their services to customers, and lets remote management teams focus on supporting the operations.
However, this shift is not coming without necessary adaptation. According to Brian King, a contributor to Datacenter Journal, software-defined data centers are able to offload workloads effectively and are capable of dealing with system failures to reduce service downtime. Although the hardware remains necessary, it is finding new uses in virtualization, allowing enhanced control and greater agility.
"The data center has become critical to the efficient operation of the modern enterprise," noted King. "Internal projects, external applications, user data and everything else are now entrusted to data centers."
The personnel factor
"The C-level and management responsible for continuous data center operations now have more headaches and concerns than ever before and they rely more than ever on the workplace individuals inside the data center," Datacenter Dynamics reported.
This is one of the many reasons why businesses have continued to utilize remote database management systems. A company specializing in maintaining optimal data center performance has the necessary resources available to give the system their undivided attention. Big data's increasing prevalence has been accompanied by a demand for larger storage capabilities and bandwidth, meaning much more complex data centers. The task may be simply overwhelming for a 12-person IT staff to manage on its own.
Linking communication between the enterprise and companies providing remote database monitoring solutions is the chief financial technology officer. According to Datacenter Dynamics, because the infrastructure of data centers is growing more complex, investors and stakeholders are viewing them as equal in importance to product development and profit margins.
Standardizing the database
The source reported that organizations such as the American Institute of Architects and the Institute of Electrical and Electronics Engineers have recognized that establishing performance requirements has pushed the database into green initiatives as well. Cognizance of eco-friendly practices could potentially help reduce data centers' contribution to climate change.
As opposed to acclimating a small or mid-sized IT department to new architectures, remote database management companies already versed in the technology will be able to deploy services much more quickly and efficiently.
For my first blog post, I decided to write about a cool little project that came across my desk a few months ago. The request was to create an SSRS report that could be used to insert records into a database by supplying the user with dropdown parameter values from a list of tables. With a simple stored procedure, you can easily set up a report to insert records into a table, but there was an added requirement for the user to be able to select the site (database) that the values should come from. As the user needed to be able to select the site first, the rest of the parameters needed to be set up depending on which value they selected. You can set up the data sources dynamically using a couple different techniques, but I’ll explain how I approached it.
The first step that I took was creating a table within my DBA database to host the possible databases for each site. (If you don’t have a DBA database set up on your instance, I highly recommend it, as one can be very handy when setting up dynamic packages/reports as well as testing new processes.) I also added a bit “isActive” column to allow users to easily come in and adjust which sites should be used.

My Local Table is Set Up and Ready for the Report:
Once you have the table set up, you can begin creating the initial data source to pull back these values. The first data source you want to create will connect to the instance and database where you just created your site list table. Using the below query in your dataset, you can bring back the list of active sites for the user to choose from.
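A minimal version of that dataset query, assuming the site list table is named dbo.SiteList with DatabaseName and isActive columns (both names are illustrative — match them to whatever you created), might look like this:

```sql
-- Return the active site databases for the report's dropdown.
-- Table and column names are illustrative; adjust to your DBA database.
SELECT DatabaseName
FROM dbo.SiteList
WHERE isActive = 1
ORDER BY DatabaseName;
```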
Once you have your dataset configured, you will want to create a parameter on the report for the user to be able to select their requested site. You can create this parameter just as you would any other by right-clicking on the Parameters folder, going to Add Parameter and filling in your name and label. For the available values, you will want to add in the dataset that you just created, with the fields being your database names.

Configuration for the Database Selection Parameter:
Now that the user has selected their requested site, you can create the dynamic data source to be used for the rest of the parameters. To create this, you will want to add a new data source and then click on the expression button beneath Edit.

Setting Up the Data Source to Use an Expression:
In this case, all of the databases are stored on the same server, so you are able to hardcode the server name, but you will need to pass in the user's selection for the database. To do this, you can choose the Parameters category to make the selection. The syntax for your connection string should now look similar to this:
="Data Source=SERVERNAME; Initial Catalog=" & Parameters!SelectSite.Value

Expression Set Up and Ready for Use:
You can now use this data source for the remaining datasets to pull back any other parameters that may be needed. I’ve found that you will need to create another temporary data source to one of your databases when setting up your datasets in order to populate the available columns. Once the columns are validated, you can change the connection back to your dynamic data source and everything will be good to go.

Username Dynamically Changes Depending on Your Site Selection:
You can also easily modify this expression to dynamically set the Instance your report runs against instead. I have found this technique very useful when creating dashboards, as you can easily select which instance to check. What if you needed the user to be able to select both the SQL instance and the database? Well, you could store the full connection string within an added column in your table and, when setting up your parameter, use that column for your returned Value field. From there, you can adjust your data source expression to pass in the selected value.
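As a sketch of those variations (the parameter names here are hypothetical, not from the original report), the data source expression might become:

```
' Selecting the instance instead, with a fixed database:
="Data Source=" & Parameters!SelectInstance.Value & "; Initial Catalog=MYDATABASE"

' Passing a full connection string stored in the site table's Value column:
=Parameters!SelectConnection.Value
```

Either way, the report server builds the connection at run time from the user's selection, so the same report definition serves every instance or site in your table.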
I hope you’ve enjoyed my first blog post. If you have any feedback or questions, please leave them in the comments below or look me up on my Google+ profile! Check back soon for other great blog posts from my colleagues and me in the near future.
By now, consumers and business leaders alike should know of the pressing threat that cyberattackers pose to retailers and financial institutions across the world. Following the major breaches at Target and Neiman Marcus this past holiday season, IT decision-makers are scrambling to reevaluate and strengthen their security measures to minimize the likelihood of a harmful attack on their organizations. But while security strategists know that they must take action to avoid the fate of losing customers' data by the millions, the workings of the point-of-sale (POS) malware that executed last year's attacks are still unknown to many. PCWorld pointed to "ChewBacca" malware as the primary variant responsible for the unprecedented magnitude of these incidents.
Playing a high-stakes game with cybercriminals
While there is not a shred of enjoyment to be had when creating defenses meant to fend off cybercriminals, Computerworld likened the process to a game of chess to demonstrate the tactical, high-pressure tasks that IT security experts face every day. On the other side of things, attackers need to stay on their game as well if they want to successfully breach the perimeter of a network. Here is the rundown of a typical attack from the perspective of a hacker looking to access company or customer data:
- Before any information is viewed or extracted, the attacker must first gain access to a company network, getting past that first line of defense. Computerworld mentioned that this can be done via phishing or going to a third-party source to gather usernames and passwords. This process has also been made easier by the widespread use of social media platforms, which encourage users to post personal information on a public forum. This is a gold mine for hackers who come across password hints asking for a detail that can easily be found online for free.
- Once access is granted, attackers will find a vulnerable link in a network – ideally one that processes heavy flows of customer information including payment data, names, card verification value (CVV), account numbers, and expiration dates. Computerworld explained that once this data has been taken, it is sent to a drop point within the company's own network before being shipped off to a secure location for the cybercriminals' use. This information is then sold on the digital black market for a premium price.
Now that POS systems have been identified as a weak point in company networks, IT leaders can hopefully strengthen their security measures to prevent future breaches.
The past couple of years have brought considerable investment in telecommunication technologies and remote access applications. Being able to work from home when hazardous weather doesn't permit travel gives both private enterprises and public organizations the benefit of delivering service to customers and constituents. Privacy concerns have accompanied this convenience, driving interest in mobile device management software capable of preventing cyberattacks.
Although employees still operate through PCs, in 2013, mobile phones overtook PCs as the most common Web access device globally, Talkin' Cloud Magazine reported. The article also stated that the ability for employees to access corporate information remotely, at whatever time is most convenient for them, has enabled them to make more educated decisions. Despite this increased flexibility, companies are still wary of security risks. Remote database monitoring systems are able to monitor on-premise data centers, and now software vendors are looking to extend their reach.
"Some employees are sensitive to allowing IT access to their personal mobile devices because they fear the 'Big Brother is Watching' syndrome, especially in light of Edward Snowden's recent revelations about the National Security Agency," the article noted.
This paranoia, combined with the enduring usage of remote instruments, has led companies to heavily invest in mobile device management capabilities.
Mergers and acquisitions
The call is being answered by businesses such as Oracle, a company well known for its database management system. According to FierceMobileIT, the software company acquired Bitzer Mobile, a business specializing in mobile application solutions, last November. Suhas Uliyar, vice president of mobile strategy product management at Oracle, told the magazine that it has since released a couple of products geared toward improving remote access and security.
- Fusion enables companies to build and run business applications while integrating with hardware and software infrastructures.
- Identity Manager supplies a single sign-on tool capable of extending an enterprise's web applications to mobile devices.
The company also constructed a new package called Mobile Suite, which integrates mobile applications from the business-to-employee end to the business-to-consumer front. Uliyar claimed that the two greatest challenges of delivering the service were running the application across multiple operating systems and managing access to content that is going to be given to employees bringing their own devices.
The flexibility and business optimization offered through mobile tools can't come without basic security measures. It is no longer enough for monitoring to be conducted once or twice a day; it must be deployed constantly through an application, 24/7.
Since mobile technology first came on the scene as a business tool years ago, thought leaders across industries have been extolling the virtues of its capabilities. Businesses were rightfully excited about the possibilities that this highly agile platform could bring to the table, allowing workers to email on the go and remain an active participant in company operations regardless of geographical location. Of course, mobile has come a long way since its origins with antennae-laden cell phones and greyscale personal organizers, but there are still a few concerns that are keeping it back from reaching its full potential.
Integration and compatibility setbacks are still a factor
As the features of mobile technology have advanced rapidly in the past decade, business leaders have enjoyed the many benefits of smartphone and tablet applications to enhance productivity and collaboration across departments and with customers. Yet because the Web was such an early addition in the larger timeline of the platform, more businesses are finding that integration, compatibility and security methods are having trouble keeping up with an IT environment so reliant on interconnectedness.
A recent report from Network World highlighted an Enterprise Strategy Group (ESG) study revealing that mobile technology is still dealing with some growing pains in terms of its integration with enterprise operations as a whole. Furthermore, not nearly enough organizations have created a clearly delineated strategy for implementing mobile in a strategic way, opting instead for loose bring-your-own-device policies that allow for disjointed application use and open networks up to a host of security vulnerabilities. The survey, which analyzed responses from a wide array of industry leaders, showed that less than half of responding companies had taken the time to create a sound mobile device management (MDM) plan that works to mitigate these problems.
Other IT security concerns play into MDM
According to research from ESG, industry leaders agree for the most part on which concerns should be prioritized when it comes to businesses creating an MDM solution for their networks. The opinions reflected in the survey show that the current methodology still needs a bit of work before it can make a serious impact on the way today's companies operate. Thirty-eight percent of respondents said that MDM is difficult to integrate with other security technologies, while 35 percent claimed they had experienced scalability problems with MDM as far as supporting a large number of mobile devices. While the technology is still growing up, it will not likely be going anywhere in the near future.
There is no denying the great range of benefits that a business can reap from implementing mobile devices into its strategies, but without a dedicated security plan to accompany the decision it can be extremely risky, especially for firms that handle a lot of personal financial information. With data breaches occurring more frequently year after year, IT leaders need to be extra cautious when proceeding with mobile integration and bring-your-own-device (BYOD) or potentially suffer the consequences of a cyberattack that could compromise company secrets and customer data or harm internal network processes.
Experts point to mobile as one of many IT security concerns
Businesses that pride themselves on protecting their networks to ensure the safety of their data often forget that mobile implementation is an equally important part of any sound security strategy. Indeed, database quality and corollary protection measures are essential to safeguarding company information within the walls of a business network, but sloppy mobile integration policies can create vulnerabilities that IT departments may not anticipate – especially if customers are constantly using their smartphones to make transactions and receive support from a company.
Industry leaders are gradually smartening up to the idea that mobile security needs to be prioritized in this day and age, according to CyberEdge Group's 2014 Cyberthreat Defense Report. Surveying 750 security experts and innovators, the study reported that 60 percent of respondents suffered a cyberattack in 2013, a large portion of whom cited a lack of security expenditure as the main factor. Mobile devices were given the lowest rating, 2.77 out of five, when participants were asked to rate their organizations' ability to defend against attacks through nine different domains, including laptops and social media platforms.
Federal agencies at risk as well
Private enterprise is not the only sector finding itself vulnerable to a host of growing IT security concerns – government organizations using mobile devices also run the risk of a data breach every time they fail to adequately assess their policies. A recently released report entitled "Mobilometer Tracker: Mobility, Security, and the Pressure In Between" from Cisco revealed that six percent of government employees who use their smartphones at work admitted to having lost their devices at some point or other – this adds up to over 3,500 chances for a malicious attack to occur. If an organization plans to let its workers use mobile devices on the job, they had better be prepared with the necessary protection.
While IT departments are often exclusively given the burden of handling database quality measures, maintaining a consistently effective network security strategy is the responsibility of everyone within an organization, especially those in executive positions. Not only are these individuals calling the shots when it comes to devising and implementing these plans, but they need to recognize their influence as role models in the corporate structure. In other words, employees are less likely to follow security protocols if upper-level management is shirking their obligations to adhere to those same policies. With the recent spike in data breaches and other high-profile cyberattacks on retail and financial firms worldwide, there is no better time for the C-suite to get themselves in line with best security practices.
Communication across departments is key
Department siloing is a corporate epidemic that prevents executives and workers alike from effectively communicating with others in their firm, thereby reducing collaborative opportunities, minimizing accountability and decreasing the likelihood that interdepartmental innovations will occur. In a day and age that offers the technology and mobility to successfully open channels of communication between these imaginary walls, this is a massive oversight that executives must work to eliminate, especially if they notice that a lack of correspondence is limiting their firm's ability to perform. An active exchange of information and ideas to improve security strategies needs to become a habit for any company looking to stay ahead of the curve with respect to the evolving threat matrix of today's digital world.
Understanding the role of security
This concept is perhaps most applicable to network security given its role in the protection of data that spans across departments. Leaders in each section must stay communicative and open about their security practices and actively work together in an effort to counteract any weaknesses that may arise. A recent article from Dark Reading highlighted the importance of this practice by exploring the findings of a recent panel in Tyson's Corner, VA entitled "What Every CFO Must Know About Cyber Threats and Security."
"In some ways, security is IT's revenge on the finance department – they say, 'You don't understand what we do, so we'll spend your money however we like,'" Kevin Mandia, CEO of security forensics firm Mandiant, reportedly stated at the panel. "As a CFO, you need to understand something about the security issue."
With respect to proper budgeting and efficient security expenditure, a lack of communication could spell doom for companies that don't care to understand how other departments approach the process.
Here at RDX, I’ve been focused on the Big Data/NoSQL sphere, as we have recently begun offering Hadoop solutions to our clients. I wanted to take a few moments to highlight some of the skills useful for integrating Big Data solutions – they may not be what you expect!
I’m going to take a look from 3 different perspectives – Systems Administration, Software Development, and Leadership – and we’ll take a look at each category as one part in a 3-part series. Since I work with Hadoop, the content will be tilted in that direction. However, if you use MongoDB, Cassandra, or another platform, a lot of this still holds true.
The task of incorporating a new platform usually makes everyone in IT sigh, moan, cringe, or whatever their favorite affectation is. With a good understanding of what you might need from your systems administrators, adding a Big Data/NoSQL component into your environment is easier. The most important skills to have as a Big Data systems administrator are networking, machine environment configuration, and role management. You won’t benefit as much from deep knowledge of centralized storage or virtualization. This is not to say those skills have no value in the world of Big Data, but they don’t come into play as often for reasons I’ll explain shortly. Of course, there are cloud platforms that let you do an end-run around the problem, but that’s a topic for another time…
Networking – It can get complex.
You have computers and you need them connected to one another. You get a nice switch and plug everything in – move along, nothing to see here, right? When it comes to your Big Data stack, it’s not that simple. You’re going to need a good understanding of advanced network topologies and bandwidth aggregation. A Hadoop cluster, for example, needs connections between every node. While you can run a small cluster in a traditional tree configuration, the distribution switch can become oversubscribed, which means you’re not going to see the terabyte-chomping performance that got your attention in the first place. In a larger cluster, cross-connected switches and aggregation tiers come into play, and things get complicated.
Machine Configuration – Consistency is crucial.
A distributed platform depends on needed binary libraries and configuration files being in the correct location across all nodes, with a few exceptions for special-purpose nodes. Unmanaged upgrades can cause pathing issues, leading to unusable nodes or failed jobs. Planning and maintaining a consistent logical structure will ease the burden of managing a cluster. Additionally, a knowledge of tools like Puppet or Chef can speed up deploying tens or hundreds of nodes at a time, should you need to expand your cluster.
Role Management – Layers upon layers.
One of the challenges in the Big Data realm right now is fine-grained security. Turns out, things get complicated when you have an end user passing a message to an application that hands it to a node manager that hands it to 25 nodes, that all hand it back directly to the user. With a Big Data system, you’re likely dealing with a metasystem above bare metal, with an independent set of roles and access. Sometimes the access is drawn from the roles defined on the hosting hardware, and sometimes it is not. The superuser on one layer may not be so super on another – the root account on my development machine can’t even save anything to my test cluster! Make sure you understand who should get to do what where.
Centralized Storage – But we just bought it!
As parallelism is where most Big Data systems gain a performance advantage, storing everything in a central data silo is not ideal. Even RAID runs counter to the design of a distributed file system. In Hadoop, the whole point is that the job moves to the data, eliminating many of the steps involved in shipping data around at compute time. Understand the storage configuration recommendations – resist the urge to hook Hadoop up to your SAN.
Virtualization – A place for everything, and everything in its place.
For some reason, machine virtualization leads to dogma. In forums and discussions, the topic of virtualizing Hadoop nodes comes up often. Fundamentally, this misses the point of distributed architecture. Contention occurs at several points when many nodes are virtualized over one host. Additionally, the fault tolerance designed into a parallel cluster is gone if half your cluster is virtual and the host has a problem. There are so many potential issues related to this, but I’ll end the topic with this – virtualization is great for testing, but beyond that, your platform should be running on the machines themselves.
Throughout Part 1, we’ve looked at some of the most important systems administration skills you or your organization will need to launch and integrate a Big Data platform smoothly. Thanks for reading, and keep an eye out for my next installment, covering software development in the Big Data realm. I’m sure there are some people out there who want to discuss what I’ve said – sound off in the comments or reach out to me on Google+! Thanks again!
It is no secret that data breaches are one of the greatest IT security concerns in today's digital landscape. Falling victim to a large-scale attack such as the ones sustained this past year by retailers Target and Neiman Marcus can not only be instantly damaging to a company's reputation, but may continue to wreak havoc on an organization that leaves vital customer financial information exposed to malicious parties. CBS MoneyWatch recently explored the prolonged concerns that come with a data breach in a world reliant on the convenience of e-commerce.
A never-ending attack
Once a company realizes it has suffered a cyberattack, the worst of its woes may have just begun. Enterprise and customer data alike can remain at risk long after the initial infiltration has been launched. CBS pointed to Javelin Strategy and Research studies ongoing since 2006 that examined the growing concern about fraudulent activity occurring following a breach. While only 10 percent of stolen information was utilized by cybercriminals three years ago, the chances of falling victim to fraud today have risen to one in three. If compromised data was financial in nature – either a debit or credit card number – there was a 46 percent chance that it would be used maliciously in 2013, according to the source.
The rise of social media and the human tendency to create one password for multiple online accounts were cited by CBS as two of the main reasons why cybercriminal activity is more likely to resemble the spreading of a cancer than an isolated, one-off attack. Because so much personal information is available on the myriad social applications found online, offenders can now easily track down the information required to break through the knowledge-based security systems that have become industry standard for most websites protecting vital user data. Knowing the name of a pet or best friend can be used as ammunition to wage more devastating cyberattacks and can lead to a complete identity takeover for those who are not cautious.
Undetected malware persists
Cyberthreats today are so dangerously powerful because they are more non-invasive and difficult to detect than the traditional data breach attempts that defined the early stages of the Internet. A recent article from PC World looked into exploitative files that forced Adobe to patch its Flash software in an effort to protect user log-in data. While this threat was successfully neutralized, cybercriminals are constantly evolving their techniques to make the next move.
Security breaches are quickly moving from unwanted possibilities to realities in the health care landscape as organizations continue to struggle when combating next-generation risks. Specifically, security-related incidents, data loss and unexpected outages cost U.S. hospitals approximately $1.6 billion per year, with exposure expenses exceeding $810,000 per incident.
These were some of the findings from a recent MeriTalk and EMC Corporation report, which found that roughly 19 percent of health care organizations have experienced a data breach within the past year. Although a variety of occurrences have contributed to this statistic, including the growing presence of malware, the loss of critical equipment and negligent user error, the fact remains that hospitals need to improve their security stance if they want to keep patient trust high and avoid unnecessary damage caused by failing to achieve compliance.
Fortunately, most decision-makers are aware of the problems they are having and plan to make changes to improve their defensive strategies and resiliency. According to the report, roughly 55 percent of health care organizations intend to improve encryption and general data protection, while 54 percent plan on complying with robust risk analyses required by compliance regulations. Another 44 percent of executives said they will boost breach detection and prevention technologies.
"Health care organizations are making significant IT investments to transform IT infrastructure and ensure that patient information is secure, protected, and highly available," said Scott Filion, general manager of global health care at EMC Corporation. "Trust has become a board-level business priority."
A separate Health IT Security report highlighted the importance of building a top-down risk approach to improve user awareness and cut down on negligent errors. Advanced training programs can also help employees learn more about the threat landscape and best practices needed to avoid unnecessary challenges.
Health care firms that want to improve their long-term resiliency need to address security challenges, as failing to do so will impair brand and patient trust.
The power, efficiency and life span of a company's data centers each play a big role in determining the overall effectiveness of an IT strategy. From the ground up, businesses need to implement their storage in a way that will give users fast access to information while remaining stable and resistant to network threats. But while IT security concerns have received the spotlight in recent news reports in the wake of the Target and Neiman Marcus data breaches, IT leaders cannot overlook the importance of maintaining company technology assets as a preventive measure against network shut-downs, poor performance and the yearly wear and tear of computer systems.
Even basic prevention counts
For many IT professionals, preventive maintenance is one of the more straightforward methods of keeping a database functioning at its highest level. Typically, system manuals will include a prescribed list of tasks that need to be addressed on a regular basis to ensure the reliability of the product. While these duties may seem simple, they can be easily overlooked by IT departments trying to focus their efforts on more complex issues that may be plaguing a datacenter. Jeff O'Brien, a product specialist and blogger at Maintenance Assistant Inc, recently gave business leaders reasons why they should never forget preventive maintenance, even if it seems redundant or systems seem to be operating well.
O'Brien explained that taking proactive measures to mitigate datacenter weaknesses is key for avoiding the loss of productivity or vital information that can occur when systems shut down without warning. Even a few hours of database inactivity can cost a company in many ways, not to mention the extra time and money spent on recovering data and repairing any damaged network systems. Closely monitoring database performance is an important step in predicting problems that might pop up for IT departments because it can help to prioritize preventive maintenance procedures. Included in the article were some of the most commonly performed preventive tasks for datacenter maintenance staff:
- Replace HVAC filters
- Check CRAC temperatures
- Verify oil levels in compressors
- Lubricate moving parts such as bearings
- Verify backup generator is operating correctly
- Inspect fire safety systems
Making the leap to the cloud
Maintaining databases effectively and regularly can be a constant challenge for any IT department, which is a big part of the reason why cloud-based computing services are growing in popularity so quickly, according to ZDNet. Businesses will have to weigh their options before changing their database solutions and make the best choice for their requirements.
While the protection of internal data such as employee records, financial information and company secrets has to rank high in companies' lists of IT security concerns, customer credit card numbers have to be prioritized, considering the high-profile breaches that have lately cluttered headlines. Recently, retailers have been especially prone to having their customers' information compromised, as a Reuters report revealed that malicious software known as "ChewBacca" was implemented by a cybercriminal ring, targeting several brands and stealing data from 49,000 payment cards, according to cybersecurity research firm RSA FirstWatch.
Large scale attack victimizes many
The report revealed that small retailers in 11 countries including the United States, Canada, Russia and Australia fell victim to breaches that exposed customer data via malware that preyed on vulnerable point-of-sale systems. After an extensive investigation that tracked the two-month attack on retailers around the world, RSA FirstWatch researchers revealed on their website that the illegal operation was shut down only after hackers had stolen financial information from over 24 million customers. While the threat has been neutralized for the time being, the malicious ChewBacca software may be utilized in the future if firms do not take active measures to secure their POS systems.
"The end game is to gain credit card information, so the hackers are going to go wherever it is easiest to get that information," Will Gragido, senior manager with RSA FirstWatch, told the news source.
Gragido and his team, the threat research arm of RSA Security and subsidiary of storage conglomerate EMC Corp., informed the FBI of the details surrounding the ChewBacca breach, namely the location of the operation where command-and-control servers were being used by hackers to infect point-of-sale systems. RSA would not reveal any further information about the victims or culprits and the FBI reportedly could not be reached for comment, according to Reuters.
Point-of-sale systems still at risk
IT departments may have their hands full with optimizing company networks and securing databases, but point-of-sale units still represent a serious vulnerability for retailers that fear malware such as the ChewBacca Trojan, according to PC World. The source was in correspondence with Marco Preuss, director of Kaspersky's Global Research and Analysis Team in Europe, who explained that although the software is not complex or difficult to detect, cybercriminals switched servers regularly which made their operations progressively harder to pinpoint over time.
I have been working as a DBA for RDX (Remote DBA Experts) for a few years now. At this point in my career, I believe it's time to start giving back to the SQL community. I have often seen people misunderstand how the Lookup task in SSIS actually works. The Lookup task can function as a join method; however, it will only pull back the top matching record from the lookup table defined in your SSIS package.
When is it the right time to use a Lookup task? The Lookup task is best when dealing with either a one-to-one or many-to-one relationship between tables. Today, I want to give a deep dive into the Lookup task and explain what’s actually happening behind the scenes. To start, we will use the AdventureWorks database that is provided by Microsoft.
Let’s start with two tables, Person.Person and HumanResources.EmployeeDepartmentHistory. I will isolate the records down to one Person to better depict the results. Our first query does a search on the database and filters down to Rob Walters:
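The original post showed this query as a screenshot; a reconstruction along these lines (the exact column list and filter predicates are assumptions) filters Person.Person down to Rob Walters:

```sql
-- Find Rob Walters in the Person table (filter columns assumed)
SELECT BusinessEntityID, FirstName, LastName
FROM Person.Person
WHERE FirstName = 'Rob'
  AND LastName = 'Walters';
```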
From the query results, we can see that Rob Walters has one record in the Person table, with a BusinessEntityID of 4. Next, let's take a look at how many records Rob Walters has in the EmployeeDepartmentHistory table.
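This query was also shown as an image in the original post; given the BusinessEntityID of 4 noted above, it would look something like this:

```sql
-- Department history rows for BusinessEntityID 4 (Rob Walters)
SELECT BusinessEntityID, DepartmentID, StartDate, EndDate
FROM HumanResources.EmployeeDepartmentHistory
WHERE BusinessEntityID = 4;
```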
From here, we can see that Rob Walters has two entries, showing that he has been in two different departments over time. Let's look at how each of the different T-SQL join types behaves when joining the two tables.
Using just a regular join, the results are as follows:
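A sketch of that regular (inner) join, with the same Rob Walters filter applied:

```sql
-- Inner join: the one Person row matches both department history rows,
-- so the result set contains two rows
SELECT p.BusinessEntityID, p.FirstName, p.LastName,
       h.DepartmentID, h.StartDate, h.EndDate
FROM Person.Person AS p
INNER JOIN HumanResources.EmployeeDepartmentHistory AS h
    ON h.BusinessEntityID = p.BusinessEntityID
WHERE p.FirstName = 'Rob'
  AND p.LastName = 'Walters';
```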
We get back what we are expecting. GREAT!!!
Just as a quick proof of concept, let’s try all of the common joins used in T-SQL and see what the results bring back:
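For the proof of concept, the same statement can be rewritten with each common join type; because every filtered row on either side finds a match here, each variant returns the same two rows:

```sql
-- Swap INNER JOIN for LEFT, RIGHT, or FULL OUTER JOIN; on this filtered
-- data every row matches, so each variant produces the same two rows
SELECT p.BusinessEntityID, h.DepartmentID, h.StartDate, h.EndDate
FROM Person.Person AS p
LEFT JOIN HumanResources.EmployeeDepartmentHistory AS h
    ON h.BusinessEntityID = p.BusinessEntityID
WHERE p.FirstName = 'Rob'
  AND p.LastName = 'Walters';
```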
Each result set has 2 records. The takeaway here is that joining the two tables results in a one-to-many relationship.
Let's take a look at it in SSIS using the Lookup task. I will start with the Person table, which we now know contains only one record. Next, I will look up against the Department History table, which we know has two records.
Our result is one record from the Person table. Additionally, the Lookup only brings back one record from the Department History table.
The result is one record, and we know that this result cannot be correct. The Lookup task produced a one-to-one relationship, but we were expecting a one-to-many relationship.
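In T-SQL terms, what the Lookup task did here is closer to a correlated TOP (1) subquery than to a join. A rough equivalent is sketched below; since the Lookup gives no guarantee about which matching row counts as the "top" match, the unordered TOP mirrors that ambiguity:

```sql
-- Approximation of the SSIS Lookup: at most one matching row per input row
SELECT p.BusinessEntityID, p.FirstName, p.LastName, d.DepartmentID
FROM Person.Person AS p
OUTER APPLY (
    SELECT TOP (1) h.DepartmentID
    FROM HumanResources.EmployeeDepartmentHistory AS h
    WHERE h.BusinessEntityID = p.BusinessEntityID
) AS d
WHERE p.FirstName = 'Rob'
  AND p.LastName = 'Walters';
-- One output row, even though two department history rows match
```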
Now, I will flip the roles of the Lookup and start with the table that has more than one result, Department History. I'll look it up against the table that has only one record for the Lookup to match, the Person table.
Now this looks correct!
The Lookup task can be used to duplicate a join operation in T-SQL; however, it's important that you understand the underlying table structures. There are other transformations you can use to join two or more tables with many-to-many relationships. However, in SSIS, the Lookup task should not be used against tables that have a one-to-many or many-to-many relationship. The Lookup task, as designed, will only look for one matching record in the table defined in the Lookup task.
Thank you for reading my first RDX blog post. Stay tuned for my next post about the merge join transformation task in the coming weeks!
IT departments these days are often just as, if not more, concerned with the security measures taken to protect their company's data than they are with optimizing their databases for speed and efficiency. With the increasing complexity and frequency of cyberattacks over the past few years, this pattern is certainly warranted. The consequences of a high-profile breach can not only compromise vital company data and the private information of employees and clients, but also severely damage the reputation of an organization, making it difficult to recover. Government spying is also a concern, and business leaders want to ensure that their operations remain completely private.
Subduing client fears
Companies want to know that they are getting the best protection measures available when they sign on to implement a new database for their business operations. Because quality security must be adaptable and able to keep up with the evolving threats of the IT world, leaders often determine the suitability of a database service by its track record. According to CRN, this is the exact approach that Oracle CEO Larry Ellison used to recently convince Cloud World conference-goers in San Francisco that cloud customers had nothing to fear with regard to government spying.
"To the best of our knowledge, an Oracle database hasn't been broken into in a couple of decades, by anybody," Ellison reportedly said. "It's so secure, there are people who complain. If you create an IBM database, you get to see the data," Ellison continued, explaining that Oracle's database does not allow visibility for administrators programming the systems. "You have a bunch of authority but you can't look at the data."
A track record speaks for itself
Ellison continued the conference by recounting Oracle's legacy of security, revealing that one of the company's first clients was the Central Intelligence Agency in the 1970s, according to ADT Mag. The company, then known as Software Development Laboratories, launched the database known as 'Oracle' and took that as its name in 1982. Ellison and his co-founders prided themselves on security from their modest beginnings and have continued to emphasize it through the years, as seen in the 2000s-era 'unbreakable' database promotional campaigns.
"Mr. Snowden never could have gotten into an Oracle database," Ellison stated, according to ADT Mag.
Oracle clients should feel confident in their database security but remain vigilant in today's uncertain cyber environment.
Big data is a crucial tool for any business looking to gain insight into the behavior of its customers, the effectiveness of its marketing campaigns and the performance of its employees. That is why it is so important to leverage database management services that ensure the protection, efficiency and power of a company's data. Few things can be more detrimental to business operations than a technical failure that leaves customers and workers unable to access the resources they need, and making database quality a priority is the best way that IT leaders can minimize the chance of these losses. This is not to say that businesses should only be working to maintain the capabilities of their storage solutions. They should also be actively pursuing new ways to optimize their systems to enrich their big data strategies.
Advancements in efficiency
While IT departments are always looking to better their database programming to get the most out of their current hardware, storage systems are ultimately limited by the raw power of the servers they rely on. According to MIT News correspondent Helen Knight, storage methods specifically designed to house and access big data are in development with the hope of boosting the speed and efficiency of analytics processing to get users results faster than ever. Sang-Woo Jun, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, explained why current models of storage architecture are becoming outdated.
"Storing data over a network is slow because there is a significant additional time delay in managing data access across multiple machines in both software and hardware," Jun told the news source. "And if the data does not fit in DRAM, you have to go to secondary storage – hard disks, possibly connected over a network – which is very slow indeed." Jun is working alongside other CSAIL grad students and professors to speed up access times for big data storage based on a network of flash-based devices.
Big data matters now more than ever
Improving the speed and accessibility of big data storage methods is at the front of many IT minds, and for good reason. According to eWEEK, the way that companies handle their big data analytics and storage solutions is likely going to determine their success in 2014. With more data accumulating by the minute, high-speed analytics is far more than a passing trend.
In the world of IT, database efficiency has become more crucial year after year as organizations seek to store, track and analyze massive streams of information to better their operations. Countless options exist for decision-makers who need to address their data concerns with a comprehensive solution, but at the forefront of the industry is Oracle, whose recent release of Database 12c is allowing IT departments to make storage and related applications easier to use and faster than ever. The University of Minnesota is one of many organizations benefiting from the high-performance and intuitive user interface of these solutions, as highlighted in a recent article from Forbes.
A testament to superior efficiency
While many firms are still used to installing, programming and managing their own in-house servers to address their storage needs, Forbes explained that cloud-based databases are not only faster and more efficient to implement, but also simpler to update and scale to any change in requirements. IT departments can now focus more on optimizing their operations and mitigating IT security concerns rather than wasting hours trying to repair databases or recover information that has been lost due to a crash or network outage. Andy Wattenhofer, manager of database administration with University of Minnesota's Office of IT, revealed how database-as-a-service (DBaaS) solutions have changed the way his department approaches their storage and access needs.
"Then we'll take about five minutes to provision a database service for this application, with a user name and password," Wattenhofer told the news source. "The database piece has become really streamlined for that. The amount of work we do has become almost trivial. It doesn't really matter what size they are; it still fits into this one database model. That's pretty much what I'm looking at - further efficiencies and better performance and availability, while reducing administration overhead."
What this means for private enterprise
DBaaS presents a clear set of advantages for large organizations such as universities and hospitals, but Forbes noted that the private sector will also benefit from these advancements in a big way. A wide range of business applications can be supported by DBaaS, including Oracle PeopleSoft for HR, financial and administrative tasks and website optimization. Of course, Oracle is not stopping there. A recent blog post from Automated Trader cited an upcoming conference dedicated to explaining the benefits of running Oracle Storage on Oracle Database to deliver optimal speed and capacity. At this rate, there seems to be no stopping this industry giant.
With more cloud computing options available now than ever before, more companies are allowing their employees to work from home, on the road or at the office with their own personal technology. While IT security concerns surrounding the use of mobile devices have been a hotly discussed topic, the protection of laptop data has not received as much attention due to the assumption that portable computers share the same defenses as their desktop counterparts. In reality, however, personal laptops still represent a major vulnerability to companies that offer bring-your-own-device (BYOD) policies and should be a top priority for IT leaders as they strategize the best approaches for network security.
Blurring lines between work and personal data
The appeal of remote access to company data is clear for anyone who has been commuting back and forth from an office for their entire life, but the potential damages that a business can suffer with sloppy BYOD practices can outweigh these benefits by a long shot. Laptops not only increase the risk of computers being stolen and giving strangers access to an individual's personal information, but many workers tend to download sensitive company data onto their portable devices without thinking twice about the detrimental consequences.
A recent report from NJ Spotlight highlighted an incident that occurred at Horizon Blue Cross Blue Shield of New Jersey, in which 839,711 Horizon members' personal information was exposed after two employee laptops were stolen. Although these devices did not technically belong to the organization, they were carrying Social Security numbers and private medical data that had been stored on the hard drives and left unprotected by Blue Cross Blue Shield's network security measures. Thankfully for the members whose information was stolen, there has yet to be a reported occurrence of identity theft using the compromised data. Nevertheless, careless employee practices such as these set a poor precedent for the reputation of any organization.
Increasing employee awareness
To avoid the repercussions of a malicious cyberattack, business leaders are advised to make BYOD policies crystal-clear to employees at all levels of the company. Workers should also be encouraged to beef up the security on their own devices to minimize the chances of someone breaking in. IT ProPortal recently provided an array of tips that can help anyone better protect their data. It is always worth going the extra mile to ensure that information remains safe.
With large-scale data breaches occurring more frequently each year, business leaders are beginning to realize that the latest advancements in cyber security might not be enough to protect them from hackers. The Target attack of late 2013 served as a leading example of the current state of affairs with regard to network security, forcing IT professionals and executives around the world to reassess the way that they look at their protection measures. Despite more money being pumped into cybersecurity daily, major retailers and banks are still not safe from the advancements made by attackers in recent years.
The latest in cyber woes
The story of struggling IT security may have reached a climax late last year, but the issues have continued into 2014 and show no sign of stopping anytime soon. Shortly following the announcement of the Target attack that revealed vital information of over 40 million customers, retail giant Neiman Marcus fell victim to a similar breach that compromised the data of millions more. According to NBC News, it doesn't end there. Cybersecurity firm IntelCrawler recently revealed that another six major merchants had their systems infected with the same malicious software that brought Target into the spotlight for exposing credit card numbers and personal details.
While no names or further details about the attacks have been revealed, IntelCrawler executive Andrew Komarov reportedly alerted Visa, along with major banks and law enforcement, that a series of breaches had occurred across the country. Due to the minimally invasive nature of this particular malware, these attacks are notoriously difficult to detect and may have been a problem for these companies long before they were informed by IntelCrawler. The software, known as BlackPOS, targets point-of-sale systems and harvests random access memory to find traces of customer data.
Hackers get crafty
Banks and clothing merchants aren't the only companies in attackers' sights. Arts and crafts retailer Michaels has recently been alerted by bankers who detected patterns of fraud from credit cards used across its 1,250 stores throughout the country, according to Krebs on Security. The report stated that the U.S. Secret Service is in the midst of an investigation, tracking purchases from Best Buy and Target flagged by people reporting suspicious behavior on their bank statements. Michaels issued a statement addressing the situation and urged its customers to keep an eye on their accounts.
Management leaders already have a lot to handle between employee and client relationships, marketing and advertising efforts and optimizing internal business processes. IT security concerns used to rank low on the list of decision-makers' priorities, but they are getting more time in the spotlight due to recent developments involving the heavily-publicized data breaches of retail companies Target and Neiman Marcus at the end of 2013. This heightened focus on cyber security has been reflected in increasing IT budgets and the prioritization of network protection best practices in recent years.
Business and beyond
Experts know that improving cyber security is a necessary step in creating a better future for business, but cybercriminal motivations for an attack could be less obvious than one may think. While many attacks are launched with the specific intention of acquiring financial information or company secrets that could benefit hackers monetarily, there are often more complex reasons behind why big businesses are targeted. According to NPR, motivations can range from simple fraud and theft to 'hacktivism' aimed at proving a political point. Government contractors are often targeted, requiring top-notch security measures to protect their information.
"If someone is breaching a defense contractor, it's probably at the behest of a foreign nation," Anup Ghosh, chief executive of cybersecurity firm Invincea, told the news source. "If someone is breaching the IMF, it's probably at the behest of a foreign nation looking for competitive [intelligence] on what's going on."
A nation at risk
Private, individual firms are the most common targets for cybercriminals because their security measures are often less developed than those of large government organizations. Nevertheless, cyber espionage is still a major problem for federal institutions with valuable information to conceal. A recent report from KeloLand highlighted the dangers that come with the nation's increasing reliance on digital infrastructure. Cyber-security advisor and former consultant with the U.S. Department of Defense Michael McNerney shared his thoughts on the country's vulnerabilities.
"We're becoming more and more dependent on the Internet and on cyber space, which I think has enormous benefits regarding productivity and information, but it also has a potential downside, as we've seen, it's a very vulnerable environment. The important thing to keep in mind is not just a question of who can do this, but also who would want to. So, would China or Russia want to shut off our power grid? I don't think so, that would mean war and that's not good for them either."
Regardless of international politics, cyber security will continue to play a major role in corporate IT strategies moving forward.