Information Assurance

The Streisand Effect

Author: Cody J., Inflow Software Engineer

The Streisand Effect is a term coined by Mike Masnick of the blog Techdirt. It describes how, when you're trying to hush something up on the Internet, the worst thing you can do is file a lawsuit or otherwise attempt to suppress the information. Something that might have been seen by a small number of people and stayed relevant for only a few days is, once a lawsuit is filed, far more likely to become a big news story with thousands of people knowing about it. So, rather than letting the whole thing blow over, getting legal involved makes the matter much more interesting and noticeable to the public.

This is relevant because when a media crisis arises, some companies attempt to suppress the information to protect their market value or otherwise do damage control. By getting heavy-handed, a small issue that might have gone away in three days becomes a major news story that will probably be talked about for weeks. In addition, the original audience might have been tens or hundreds of people, but with the new coverage, thousands will know about it.

This is most frequently seen with intellectual property issues, primarily copyright. Frequently, companies will attempt to suppress information by filing copyright infringement lawsuits, which only causes the copyrighted material to be seen by more people. However, there are many times when the information being suppressed isn’t actually covered by copyright, or at least the aggrieved company doesn’t hold the copyright, which only makes the situation worse.

This is most important for companies in terms of information security because it only sheds more light on their security failures if they attempt to kill the messenger or otherwise hide security problems. Even if security failures are pointed out online, it makes more sense to fix those problems and announce it rather than attempting to make the original posts go away, especially if false or inaccurate legal measures are used.

Companies obviously need to be aware of their public image, but rather than making things worse by calling in the lawyers, sometimes it makes more sense to take a step back, take a deep breath, then consider the problem in a rational manner and move forward. Sometimes the first reaction to a problem is the worst one to make, and there are rarely any situations where a company has to make a decision immediately. It’s entirely possible that just waiting a few days will clear the air and make the problem, i.e. the original report, go away. However, the underlying issue still needs to be addressed, but that can be considered in a more objective manner during that time.

At Inflow we solve complex terror and criminal issues for the United States Government and their partners, by providing high quality and innovative solutions at the right price through the cultivation of a corporate culture dedicated to being #1 in employee and customer engagement. We Make it Matter, by putting people first! If you are interested in working for Inflow or partnering with us on future projects, contact us here

BYOD - Bring Your Own Device

Author: Cody J., Inflow Software Engineer

Organizations nowadays have to contend with a multitude of IT issues. With the falling cost of personal electronics, especially smartphones, laptops, and tablets, people are buying a variety of devices to stay connected and, inevitably, will use them at work. Companies need to decide how they will deal with this issue.

Bring Your Own Device (BYOD) is a controversial issue in some places, especially when information security is considered. Traditionally, companies provided employees with the information systems needed to do their work. Controlling the items that were used within the company allowed more control of security; only authorized devices were present on the network, and baselines were created for each system, making it easy to handle patching and other security issues.

BYOD allows workers to bring their personal devices to work and use them on the company’s network. Some of the advantages include improved productivity (employees have already configured their personal systems to their individual tastes, so they aren’t fighting with unfamiliar or locked-down systems), improved morale (letting employees choose their own “best of breed” applications), and cost savings (companies don’t have to buy all these devices themselves).

Naturally, a significant disadvantage is security. Because personal devices touch the company’s property (such as the network itself, servers, and associated systems), the company needs to ensure that they aren’t opening up holes in the company’s InfoSec posture. Yet relying on employees (with differing levels of “computer savviness”) to maintain security on their systems can be wishful thinking.

Here is a short, non-comprehensive list of security concerns with BYOD:

  • Not encrypting a personal device and subsequently losing it or having it stolen or hacked
  • Not wiping the data on it prior to selling/disposing of it
  • Allowing friends/family to use the device, thus exposing sensitive information to non-employees
  • Setting up sharing, such as through Dropbox or peer-to-peer networking, that inadvertently shares company information
  • The difficulty of monitoring network traffic and employee activity so that only work-related information is captured and no personal information is exposed
  • Ensuring that personal systems have the latest software patches and meet the minimum security baselines
  • The inevitable need for corporate help desk services to provide support to personal devices
  • Increased corporate liability

Obviously, while there are advantages to having a BYOD policy, the security concerns can outweigh those advantages. Before a company decides to implement BYOD, it needs to consider all the pros and cons of the policy and address as many of the security concerns as possible. Eliminating and mitigating these concerns will reduce the likelihood of serious security violations, such as data breaches and malware infestations.

One alternative is to have the company purchase personal devices, e.g. providing a catalog of choices for employees, and allow personal customization in addition to the standard corporate configuration. This allows the company to maintain the same security criteria as with the traditional system but also provides employees with some ability to customize their devices.

This removes the cost benefit to BYOD but enhances the company’s security posture. While personal use is still allowed with the devices, they have to maintain a minimum security baseline to access the company’s network, they receive regular patches, and help desk support is more manageable. In addition, when an employee leaves the company, the device can be wiped by the company to ensure all proprietary data is removed.

With this corporate-owned, personally enabled (COPE) policy, there is more leeway in how the devices are purchased. One option is to simply have the company buy the devices and allow employees to use them during their employment. If an employee leaves, the device is returned to the company, as it is company property.

Alternatively, the company can allow the employee to purchase the device, either up front or at the time the employee leaves the company. Thus, while the device is controlled by the company during employment, the employee gains full personal use of it upon leaving. The company would, of course, still ensure the device is wiped before the employee departs.

As can be seen, BYOD and COPE are different ways to deal with employees’ desire to use their own devices at work. There are a number of security concerns to be cognizant of when implementing any policy that allows personal devices, whether or not they are purchased by the company. Planning for these policies needs to include a number of stakeholders and consider all the pros and cons of allowing personal devices.


Online Privacy

Author: Cody J., Software Engineer

Privacy online can be difficult to achieve, at least if you want full access to the Internet. Most websites track users for a variety of reasons, ranging from curiosity to ad placement. The most basic tracking techniques just look at where a visitor is located (based on the user's IP address), what operating system they use, and their browser type and screen resolution (useful when planning upgrades to a site).

Most websites support themselves via advertising; the visitor "agrees" to see ads in exchange for free access to the site. However, a lot of people hate online ads because of the sheer number found on websites, their annoying behavior (such as blinking text and animation), and the fact that ad servers can introduce malicious code onto visitors' computers.

Ad blocking is one way to help with online privacy. Essentially, ad blocking software looks for areas on a web page that are designated for advertising and prevents those areas from being displayed. It also looks for communications to known advertising domains and prevents the data transfer between the advertiser and the visitor's browser.
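As a sketch, domain-based blocking can be as simple as comparing each request's host against a filter list before any data is exchanged. The domains below are hypothetical placeholders, not entries from a real filter list:

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries, standing in for a real filter list.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocklisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://ads.example.com/banner.js"))      # True: exact match
print(is_blocked("https://sub.tracker.example.net/p.gif"))  # True: subdomain
print(is_blocked("https://www.example.org/article"))        # False: not listed
```

Real ad blockers add element hiding and much richer filter syntax, but the core decision is this kind of host comparison.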

The privacy aspect comes from preventing advertisers from tracking a user's surfing habits. Cookies, web bugs, and other software tools can be utilized to track users as they move from website to website. Over time, this tracking can be used to create a useful profile of an individual, to the point of knowing when someone is pregnant and offering pregnancy-related coupons.

Most people are aware of the tracking that Google performs on people who use Google's services. Considering how many online services Google provides, it is nearly impossible to avoid Google when using the Internet. While you can opt-out of Google's tracking for many of the services, there are still ways for Google to track you.

Since Google is the most used search engine (to the point where "to Google" is now a verb), the easiest way to prevent your online activities from being tracked is to use a search engine that doesn't record information about your searches. DuckDuckGo (DDG) is one such search engine. DDG doesn't collect or share personal information; your searches are not tied to your IP address or User Agent information, and the search terms themselves aren't passed along to other websites.

Obviously, for some people, this can be a vital concern when conducting online searches. If a person has a medical condition and searches online for more information, there is a possibility that the searches can be tied to that individual. Even if personally identifiable information (like name and address) is removed from the records, it is still possible to identify a person via metadata.

Another common Google service is Gmail. In case you didn't know it, Google scans all emails that move through Gmail for advertising and spam-detection purposes. This occurs even if only one person in an email chain is using Gmail. There are certain caveats to this, but, generally speaking, your email is scanned by a computer to help deliver relevant ads (relevant to the email's subject matter) to you.

Finally, you can "hide" a large portion of your online presence by using Tor. Tor stands for The Onion Router and is a protocol designed to protect its users from online traffic analysis, which is simply identifying the source and destination IP addresses of a connection. Even this simple information can cause problems, from price discrimination (like an Australian paying more for an online purchase) to safety issues (if someone is able to identify that you are connecting to a foreign agency's computer system).

Tor works by creating an indirect link between you and your destination site. A number of relay points are set up, with each relay knowing only its immediate connections; no relay knows the entire path. Each hop is encrypted, so no single relay can trace the full connection. In addition, Tor builds a new route roughly every ten minutes, so even if someone were watching, they wouldn't be able to link your current actions to previous ones.

Tor is usable with a variety of tools, such as web browsing and instant messaging, but users have to be conscious of what they do on the Internet to avoid revealing themselves. Providing your name on a web form nullifies any anonymity you had. There are a number of weaknesses within Tor, but, overall, the protocol is useful for anyone wanting to have more online privacy.

To summarize, online privacy is an ongoing battle, which some people may care about more than others. There are a number of things that can be done to improve privacy, but it is a trade-off between the benefits of using the Internet and how much information is released.


Encryption
Author: Cody J., Inflow Software Engineer

Encryption is an important part of an information security posture, arguably the most important. Often, it is the last resort to preventing data theft, as in the case of a stolen laptop or a hacker breaking into a server.

At its most basic, encryption is pretty simple: You take some information, use a “key” to encode it, and out comes gibberish. The Greeks used a wooden rod as the key, wrapping paper or other media around the rod in a spiral and then writing along the length of it. When unraveled, the writing would be meaningless; only with a rod of the same diameter could the message be read “in the clear”.

With the advent of computers, the mechanism has changed but the principle is the same. A computer encrypts data using a key and a key is needed to decrypt it. There are two main forms of encryption nowadays: symmetric key and asymmetric key. Symmetric key encryption uses the same key for both encryption and decryption, while asymmetric key encryption uses two different keys (one public and one private); this is why asymmetric encryption is sometimes called “public key encryption”.

In addition, encryption can be performed with streaming data, e.g. network connections, where the data is encrypted one byte at a time or it can be done with static data, e.g. on a hard drive, where the data is encrypted in blocks composed of a certain number of bits.

When a key is applied to data, the data is changed in a specific way, but, unlike hashing algorithms, the data can be recovered by re-applying the key in the reverse direction. Hashing algorithms work like encryption, i.e. the data is gibberish after the conversion, but hashed data cannot be recovered in a reverse fashion.
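The one-way nature of hashing is easy to demonstrate with Python's standard hashlib module: the same input always produces the same digest, but there is no key that turns the digest back into the input.

```python
import hashlib

digest = hashlib.sha256(b"attack at dawn").hexdigest()
print(len(digest))   # 64 hex characters, regardless of input size

# Changing a single word produces a completely different digest.
digest2 = hashlib.sha256(b"attack at dusk").hexdigest()
print(digest == digest2)   # False
```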

In symmetric encryption, the same key is used on both ends. This means the encryption/decryption process is very fast but it also means if someone unauthorized gains the key, they can encrypt/decrypt data just as easily. Not only will the intruder know all the secrets that were encrypted, but they can make their own encrypted data and pretend they are an authorized user.

An early example of symmetric encryption is known as Caesar's Cipher. Julius Caesar used it to communicate with his generals and other correspondents. Essentially, a plain-text message was scrambled by shifting the letters a given number of positions. For example, “A” now becomes “E”, “B” becomes “F”, and so on. As long as the person receiving the message knows the direction and number of positions to shift, the message can be decoded.
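The shift-by-four example above can be sketched in a few lines of Python; letters wrap around the end of the alphabet, and anything that isn't a letter passes through unchanged.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # spaces and punctuation are left as-is
    return "".join(out)

print(caesar("ATTACK AT DAWN", 4))    # EXXEGO EX HEAR
print(caesar("EXXEGO EX HEAR", -4))   # ATTACK AT DAWN (same key, reversed)
```

The same key (the shift amount) both encrypts and decrypts, which is exactly what makes this a symmetric cipher.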

Asymmetric encryption uses two different keys: one public and one private. The private key is kept secret by its owner, while the public key is given to everyone. The keys are mathematically linked: whatever one key encrypts, only the other can decrypt. This allows a user to exchange encrypted information with anyone who has the matching public key.

Probably the best example of this is encrypted email. A sender can use the recipient’s public key to encrypt an email; only the recipient, who holds the matching private key, will be able to decrypt it and read the contents. To send an encrypted reply, the recipient does the same thing in reverse, encrypting the reply with the original sender’s public key so that only the original sender can decrypt it.
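The public/private key relationship can be illustrated with textbook RSA and tiny primes. These numbers are far too small for real use (actual keys are thousands of bits) and are shown only to make the math visible.

```python
# Textbook RSA with tiny primes: p=61, q=53.
p, q = 61, 53
n = p * q                 # 3233: the modulus, part of both keys
e = 17                    # public exponent: (e, n) is the public key
d = 2753                  # private exponent: (d, n) is the private key
assert (e * d) % ((p - 1) * (q - 1)) == 1  # the property linking the keys

message = 65                        # a message encoded as a number below n
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)   # only the private key can decrypt
print(ciphertext, recovered)        # 2790 65
```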

Digital certificates are based on public key encryption. They are issued by an independent third party (a certificate authority, or CA) to organizations for use with computer systems, such as web servers. The CA acts as a trusted clearinghouse for digital certificates: the CA issues a company a certificate containing its public key, while end users’ browsers ship with the CA’s root certificates built in. When a user connects to the company’s server, the browser verifies that the server’s certificate was signed by a CA it trusts. If the check passes, an encrypted session is started; otherwise, the user is warned that the certificate can’t be verified.

Digital certificates essentially say that an organization is trusted by the certificate authority to be legitimate, as the organization had to prove itself to the certificate authority. The CA acts as a middleman between the organization and a user, providing the public key necessary to establish a secure, encrypted connection between the two.

This prevents individuals from having to set up their own database of certificates whenever they want to have an encrypted session with a company. Web browsers come with a large number of CA certificates built in; without the CAs, each user would have to maintain the public keys for every entity they wanted to have an encrypted session with.

In essence, the CA provides the user with the public key for a company, and the company holds the private key. If the public key changes, for example because the certificate expired, only the certificate is reissued. Users receive the new key automatically when they initiate a new encrypted session, so the company doesn’t have to send out new keys to every single user.

A similar process occurs with digital signatures. A physical token, often a smart card, is associated with an individual, and that person has to prove their identity to the card’s issuer. The issuer places a variety of digital certificates on the card (for public key encryption), as well as another certificate that is used for digital signatures.

Essentially, when a document or email is digitally signed, the file is hashed and the hash is encrypted with the individual’s private signing key, producing the signature. Upon receipt, the receiver uses the matching public key to confirm whether the signature is legitimate. This provides authentication (knowing who the sender is), non-repudiation (only the holder of the private key could have signed it), and integrity (any modification to the file would change its hash and thus invalidate the signature).
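As an illustration only, hash-then-sign can be sketched with the same tiny textbook RSA numbers (n=3233, e=17, d=2753); real signature schemes use far larger keys plus padding, and the hash here is reduced just to fit the toy modulus.

```python
import hashlib

n, e, d = 3233, 17, 2753  # toy RSA key pair; hopelessly insecure in practice

def sign(data: bytes) -> int:
    # Hash the data, reduce the hash to fit the toy modulus,
    # then "encrypt" it with the private key to produce the signature.
    h = int(hashlib.sha256(data).hexdigest(), 16) % n
    return pow(h, d, n)

def verify(data: bytes, signature: int) -> bool:
    # Recompute the hash and compare it with the hash recovered
    # from the signature using the public key.
    h = int(hashlib.sha256(data).hexdigest(), 16) % n
    return pow(signature, e, n) == h

sig = sign(b"pay Alice $100")
print(verify(b"pay Alice $100", sig))  # True: the hashes match
# A tampered message (e.g. b"pay Alice $999") would hash differently,
# so verification would fail.
```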

In summary, encryption is a vital part of computing nowadays. Like a phone, it can be used for good things as well as bad things. However, just like a phone, the odds of encryption being used for bad things are incredibly low compared to the number of transactions occurring each day.

Encryption should be used whenever possible, as it provides a measure of safety in case the bad guys get through your other security measures. That said, encryption keys need to be safeguarded; having an encrypted iPhone is worthless if someone knows your PIN to unlock it.



Programming for Managers

Author: Cody J., Inflow Software Engineer

Should everyone know how to program? To the extent of having the equivalent of a computer science degree, no. However, a basic level of knowledge will serve many people well, especially managers.

This idea isn’t new; the Huffington Post wrote about it a few years ago, and others wrote about it even earlier. The main reason people should understand the basics of programming is the pervasiveness of computers; everything is computerized nowadays, and that will only increase, especially as more devices are connected to the Internet.

Being computer illiterate should not be considered a badge of honor; too many people, from doctors to politicians and beyond, seem to take pride in not knowing the basic functionality of computers. While writing full-blown applications isn’t something everyone needs to know, managers (especially in the tech world) should know the basics. That way, they can feel more comfortable talking to the “tech-heads”, and they will have a better understanding when developers explain how hard something is and how much time it will take.

So, what should managers know about code? Here’s a brief list of concepts that will at least help them understand the lingo:

      Data structures: these are logical structures, typically built into the programming language, that allow data to be organized in an efficient manner for ease of programming. Some languages have more primitive structures that programmers can use to build their own data structures.

Each language has its own data structure implementation, though they may be similar across different languages. For example, many languages have the capability of mapping a keyword to a value (each value can be retrieved by simply supplying the keyword); these structures may be called associative arrays, hash tables, dictionaries, etc., but they all provide the same capabilities.
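In Python, for instance, the associative array is the built-in dict; the phone extensions below are invented values:

```python
# Each value is stored under a keyword and retrieved by supplying it.
extensions = {
    "alice": 1201,
    "bob": 1202,
    "carol": 1203,
}
print(extensions["bob"])     # 1202
extensions["dave"] = 1204    # adding a new key/value pair is one line
print("dave" in extensions)  # True
```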

      Conditional statements: Most frequently seen as “if/else” statements in code, conditionals allow the programmer to branch the code flow depending on certain conditions. For example, if “condition A” is true, then perform “action A”; else perform “action B”.
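In Python that branching looks like this (the disk-usage figure is a made-up example):

```python
disk_usage_percent = 91  # hypothetical measurement

if disk_usage_percent > 90:               # "condition A"
    status = "alert: disk almost full"    # "action A"
else:
    status = "ok"                         # "action B"
print(status)  # alert: disk almost full
```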

      Loops: Loops simply repeat a block of code a certain number of times until a set stop condition is reached. Once reached, control returns to the main program.

Loops can be set to repeat a certain number of times (count-controlled), to continue until a particular condition is met (condition-controlled), and (in some programming languages) loops can operate over a collection of values in a data structure (collection-controlled).

Infinite loops are generally considered a bad programming practice, as they will run forever until forced to stop, typically by user intervention. However, there are times when infinite loops are necessary for operation, such as an event-monitoring program that looks for and processes events (like an operating system), or an interactive program that continually waits for user input before doing something.
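The three loop styles above look like this in Python:

```python
# Count-controlled: repeats a fixed number of times.
total = 0
for i in range(1, 6):   # i takes the values 1 through 5
    total += i
print(total)            # 15

# Condition-controlled: repeats until the condition fails.
n = 1
while n < 100:
    n *= 2
print(n)                # 128: the first doubling past 100

# Collection-controlled: operates over each value in a data structure.
for host in ["web01", "web02", "db01"]:
    print("checking", host)
```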

      Exceptions: Exception handling is simply dealing with errors in code. When a program acts in an unexpected manner, an error is generated, i.e. an exception to the normal program flow. Programmers can add code to a program to catch these exceptions, allowing the program to continue functioning without crashing (hopefully).

Obviously, not every anomalous condition can be accounted for (which can result in program and system crashes), but programmers work hard to identify as many error-causing conditions as possible. Frequently, these errors are automatically captured by the software and sent back to the programmers. These notifications are used to create new patches for the software.
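A minimal Python example: the handler catches a division-by-zero error so the program keeps running instead of crashing.

```python
def safe_ratio(a: float, b: float) -> float:
    try:
        return a / b
    except ZeroDivisionError:
        # The exception is caught here; control continues normally.
        return 0.0

print(safe_ratio(10, 4))  # 2.5
print(safe_ratio(10, 0))  # 0.0: would have crashed without the handler
```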

      Functions: Sometimes called subroutines, functions are frequently the workhorses of programs. Essentially, they branch off from the normal logic flow in a program to perform specific actions, then return control back to the main program flow. Functions are simply containers of code-logic that can be used multiple times within a program, simply by using the function’s name.

The main benefit: without functions, you have to copy and paste the same code blocks multiple times to make your program work, i.e. your program runs from start to finish in a straight line, so to repeat a particular action, you have to manually repeat it every time. If you ever have to revise the program, changing one location is much easier than finding all the different places where the code was copied and pasted.
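A small sketch (the badge format below is invented): the logic lives in one function, so every caller picks up a future change automatically.

```python
def format_badge(name: str, department: str) -> str:
    # One place to change the format, no matter how many callers exist.
    return f"{name} ({department})"

print(format_badge("Alice", "Engineering"))  # Alice (Engineering)
print(format_badge("Bob", "Security"))       # Bob (Security)
```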

When used in classes (explained below), functions are called methods, to signify that they operate on objects.

      Classes: Object-oriented programming (OOP) is one of the most common paradigms currently used in programming. Most OOP languages use classes as their main data structure.

The main idea behind classes is that they can be used to create a generic base class, which can then be subclassed to create specific objects. For example, you could have a generic class of Vehicle. This base class has parameters that define a vehicle, e.g. the number of wheels, the type of frame, number of doors, etc. Using this Vehicle class, you could make subclasses that inherit from Vehicle, such as Bicycle, Car, Pickup Truck, Airplane, etc.

When you make a subclass, you simply provide the details for the new object as input values for the generic parameters. For example, for a Car subclass, you give it “4 wheels”, “4 doors”, “steel frame”, etc. For a Bicycle, you give it “2 wheels”, “0 doors”, “aluminum frame”, etc.

The idea behind classes is that they can be easily reused or, via inheritance, extended into specific subclasses. They also let you model real-world items more accurately. In short, a class lets you create a basic template and default behavior for something, which can then be used as a foundation for more specific changes.
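The Vehicle example can be sketched in Python, with each subclass supplying values for the generic parameters and inheriting the default behavior:

```python
class Vehicle:
    """Generic base class: the parameters that define any vehicle."""
    def __init__(self, wheels: int, doors: int, frame: str):
        self.wheels = wheels
        self.doors = doors
        self.frame = frame

    def describe(self) -> str:  # default behavior, inherited by subclasses
        return f"{self.wheels} wheels, {self.doors} doors, {self.frame} frame"

class Car(Vehicle):
    def __init__(self):
        super().__init__(wheels=4, doors=4, frame="steel")

class Bicycle(Vehicle):
    def __init__(self):
        super().__init__(wheels=2, doors=0, frame="aluminum")

print(Car().describe())      # 4 wheels, 4 doors, steel frame
print(Bicycle().describe())  # 2 wheels, 0 doors, aluminum frame
```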

      Unit tests: Testing of software is an important part of quality assurance, ensuring that the application is ready for use without any significant bugs. Unit testing is designed to create a test situation for each block of code; the code block could be an entire program, but is more often the components of the program, such as a function or a class.

A variety of techniques are used to create unit tests and, ideally, each unit test is separate from the others to ensure modularity of the code blocks. The benefits of testing are: detection of problems early in the lifecycle, ease of future modifications and integration into larger programs, and support of software documentation and design.
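A minimal sketch with Python's built-in unittest module; the add_tax function is an invented unit under test, and each test method exercises it independently of the others.

```python
import unittest

def add_tax(price: float, rate: float) -> float:
    """The 'unit' under test: one small, self-contained block of code."""
    return round(price * (1 + rate), 2)

class AddTaxTests(unittest.TestCase):
    def test_typical_rate(self):
        self.assertEqual(add_tax(100.0, 0.08), 108.0)

    def test_zero_rate(self):
        self.assertEqual(add_tax(50.0, 0.0), 50.0)

# Normally run from the command line (e.g. `python -m unittest`);
# here the suite is run programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTaxTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```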

There are obviously a number of other topics to be covered in programming, some more advanced than others. Hopefully, however, this is enough to give non-programmers some insight into what is involved in writing software and why it may take longer to get a project finished. There are a number of tutorials available on the Internet, as well as boot camps and actual academic degrees, but understanding the basics should make the life of a manager easier. Programming can be helpful in automating a number of tasks, which, in itself, should encourage people to pick up the basics so they can get to the more enjoyable parts of life.


Defining Information Security

Author: Cody J., Inflow Software Engineer

People outside the federal government may have never heard about information assurance. They may be more familiar with the term “information security” or, more popular nowadays, “cyber security”. So what’s the difference, if any, between these terms?

As defined by Wikipedia, information assurance (IA) is:

“[T]he practice of assuring information and managing risks related to the use, processing, storage, and transmission of information or data and the systems and processes used for those purposes... While focused predominantly on information in digital form, the full range of IA encompasses not only digital but also analog or physical form...”

Information security (InfoSec) is much easier to define:

“[T]he practice of defending information from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction. It is a general term that can be used regardless of the form the data may take (e.g. electronic, physical).”

Cyber security is very similar:

“[T]he protection of information systems from theft or damage to the hardware, the software, and to the information on them, as well as from disruption or misdirection of the services they provide.”

I like to think of these terms as building on one another, like a pyramid. IA is the broadest layer and the foundation: it encompasses InfoSec and adds the business side of security operations, focusing more on strategic risk management and the administrative side of security, such as compliance, auditing, continuity of operations, etc.

The comprehensive nature of IA is beneficial because it provides a methodology for securing systems and networks. Numerous checklists and supporting documents are available from NIST (the National Institute of Standards and Technology), such as NIST Special Publication 800-53: Security and Privacy Controls for Federal Information Systems and Organizations.

By using NIST, COBIT, PCI DSS, and other standards frameworks, companies don’t have to write their own security policies and procedures from scratch; they can simply take what has already been created and modify it for their own purposes. This alleviates a lot of redundant work and ensures that critical pieces aren’t missed.

The key piece of IA is the security triad: Confidentiality, Integrity, and Availability. 

  • Confidentiality is ensuring data is kept away from unauthorized users.
  • Integrity is ensuring the data isn't maliciously or accidentally changed.
  • Availability is simply ensuring the information is available when needed.

Other aspects are non-repudiation (the person who appears to have generated the data is, indeed, the actual author), and authentication (verifying that a user is actually allowed to access the data they are trying to reach).

If IA is considered the strategic, administrative side of security, then InfoSec can be considered the technical, tactical side. InfoSec generally covers the more day-to-day security operations that a company may perform: vulnerability scanning and patching, firewall and ACL configurations, deployment of IDS and IPS, etc.

You could almost consider IA to be like having a Ph.D. in Computer Science, whereas InfoSec is an MS degree. IA doesn’t necessarily play a factor in day-to-day operations, but is looking at the “big picture”, looking at changes in the different fields that support information security and coming up with “research” that leads to evolutionary changes in security practices. InfoSec is more “hands-on”, taking the knowledge generated by IA and figuring out how to put it into practice.

While IA may say a good strategy is to establish and comply with company security policies, InfoSec is responsible for figuring out what that means. It could be the development of daily, weekly, and monthly security checks, including daily checks of system logs, weekly software patching, monthly audits of user accounts, and annual penetration testing.

Moving on to cyber security: it is a more focused aspect of InfoSec, as it deals with networked information systems, especially those with access to the Internet. While InfoSec also considers physical security and portable devices, like thumb drives, that can pose threats to information systems, cyber security is more concerned with network-based threats.

With the interconnected nature of most computing devices nowadays, cyber security is the “new hotness” in terms of academic degrees and certifications. Realistically, however, it's nothing new; it’s just a new face on InfoSec. The threats haven’t really changed, and counteracting them works much as it always has.

One of the reasons why “cyber” is the new buzzword is the establishment of military cyber commands and the federal government’s creation of a number of “Cyber Czars”, along with continuous talk of cyber warfare. While Internet-based attacks from foreign countries are on the rise, little has changed in how security is implemented over a network connection.

In short, Information Assurance is a high level view of security, looking at administrative policies and guidelines for security implementation. Information security takes the ideas generated by IA and develops operational security capabilities that can be put into standard operating procedures for routine and emergency work. Cyber security is basically information security that focuses on networked equipment. While cyber security still encompasses the principles of IA and InfoSec, it really isn’t too concerned with things that are outside the computer case. Most of the security aspects covered are related to web-based attacks, denial of service, authentication attacks, etc.

Although there is in fact a difference between these terms, most people consider them synonymous. Most people don’t have enough experience with information system security to understand the differences, so they will use whatever term comes to mind, which, nowadays, is “cyber security.”

