Sunday, August 13, 2023

Creating Disks Without Disks

If you are playing around with disk forensics and want to look at tools like The Sleuth Kit and Autopsy, along with commercial tools, you need disk images to work with. While it's easy enough to get external disks to tinker with, you can also create virtual disks in Linux that will give you an image without having to generate one from a real disk. The first thing you need to do is generate a file. We can use the disk dump command in Linux, dd, to do that. We're going to fill it from a pseudodevice that generates random bytes, /dev/urandom, though you could just as easily use /dev/zero. 

kilroy@badmilo ~ $ dd if=/dev/urandom of=mydisk.img bs=1M count=128
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.269739 s, 498 MB/s
kilroy@badmilo ~ $ 

Once we have a file, we can partition it just as we would an actual device. While you can use any disk partitioning program you want, you can see the use of fdisk here. 

kilroy@badmilo ~ $ fdisk mydisk.img 

Welcome to fdisk (util-linux 2.39.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS (MBR) disklabel with disk identifier 0x1ecaf971.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-262143, default 2048): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-262143, default 262143): 

Created a new partition 1 of type 'Linux' and of size 127 MiB.

Command (m for help): w
The partition table has been altered.
Syncing disks.

You'll want to pay attention to the location of the first sector. The program fdisk defaults the first partition to start at sector 2048. We're going to use that information shortly, since we need to format the partition. When you format a disk image without providing an offset, the formatting writes over the first blocks of the disk, virtual or otherwise. Writing over the first blocks blows away the partition table, which means you don't have any partitions after you have formatted the disk. We just want to format the partition, not the entire disk. Here, we are going to create an ext4 filesystem inside the partition by providing the offset where the partition starts. 

kilroy@badmilo ~ $ mkfs.ext4 mydisk.img -E offset=$((2048 * 512))
mke2fs 1.47.0 (5-Feb-2023)

Warning: offset specified without an explicit file system size.
Creating a file system with 130048 blocks but this might
not be what you want.

Discarding device blocks: done                            
Creating filesystem with 130048 1k blocks and 32512 inodes
Filesystem UUID: eb9e3f13-66c8-4fe8-9922-69b32ded088c
Superblock backups stored on blocks: 
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

While you could provide the actual byte offset directly, in this case we are calculating it. There are 512 bytes in a sector and the partition starts at sector 2048, so we multiply one by the other to get the actual byte location on the disk. Because everything we have done so far has been on files I own, there has been no need to elevate privileges. What we do next, however, will require privilege elevation. We're going to mount the disk. 

In this case, I've created a mountpoint, which is just an empty directory, in my home directory to mount to. We're going to loopback mount, which is a way of mounting a file as though it were a block device, like an actual hard disk. A regular file is not a block device, and only block devices can be mounted and used as though they were disks. So, we use a loopback mount, which wraps the file in a loop device, and the file acts as though it's an actual disk. 
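
Creating the mountpoint is nothing more than making an empty directory:

kilroy@badmilo ~ $ mkdir mnt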

kilroy@badmilo ~ $ sudo mount -o loop -o offset=$((2048*512)) mydisk.img mnt

Now that we have our file mounted, we can use it as though it were a disk. You can copy files to and from it, though since it's a freshly formatted filesystem there is nothing on it to copy from at this point. 
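
As a quick sketch, with a hypothetical file called notes.txt, copying something onto the new filesystem and unmounting when you are done looks like this:

kilroy@badmilo ~ $ sudo cp notes.txt mnt/
kilroy@badmilo ~ $ sudo umount mnt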

Once you have this mounted device, you can use any forensics tools on it you like. Or just use it as a way to store files in a single place that can be copied around or compressed. As it's just a file, maybe no one will think to check it for actual content. The giveaway is the file utility, which identifies the image as a boot record, as you can see here. 

kilroy@badmilo ~ $ file mydisk.img
mydisk.img: DOS/MBR boot sector; partition 1 : ID=0x83, start-CHS (0x0,32,33), end-CHS (0x10,81,1), startsector 2048, 260096 sectors, extended partition table (last)
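
The image also works directly as a target for the tools mentioned at the start. If you have The Sleuth Kit installed, for instance, mmls will parse the partition table straight out of the file, no mounting required:

kilroy@badmilo ~ $ mmls mydisk.img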


Friday, September 28, 2018

VNC Over AWS

If you followed the instructions from the last post, you have a Kali instance running in AWS. The problem is that you are limited to SSH access, which is the management protocol allowed by default through the AWS security groups. You really want GUI access so you can run the pretty tools. Well, there are a couple of ways to do that. One way is a bit more complicated, though it doesn’t involve adding rules to your security group. It requires that you install an X server on your local desktop and then turn on X11 forwarding through your SSH session. If you are using PuTTY, this is fairly simple, and getting an X server isn’t very complex; Xming works pretty well, though there are others. Ideally, when you enable X forwarding, the DISPLAY variable on the remote system gets pointed back at the X server on your local system, so any program that needs a screen, keyboard and mouse is thrown back through the SSH session and displayed on your local system. While I’ve used this approach for … well, decades … I find it’s not foolproof. Sometimes the variable doesn’t get set, and pushing X-based programs back through an SSH session can be just plain clunky. So, we’ll try another approach. 
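
For reference, with command-line OpenSSH the forwarding is a single flag; a sketch, using the key file and hostname from the last post as stand-ins:

kilroy@binkley ~ $ ssh -X -i Kali.pem ec2-user@ec2-34-213-11-105.us-west-2.compute.amazonaws.com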

This will be fairly easy and straightforward as well, though it does require altering the security group in AWS to allow a port through to your Kali instance. The first thing you want to do, though, is open an SSH session to your Kali instance. Once you are there, run sudo vi /etc/init.d/vncserver to create the script that will start the VNC server at boot (you need sudo because /etc/init.d is a directory that requires administrative privileges to write to). Once you have vi running, paste in the following code:

#!/bin/sh
### BEGIN INIT INFO
# Provides: vncserver
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start VNC Server at boot time
# Description: Start VNC Server at boot time.
### END INIT INFO

USER=root
HOME=/root

export USER HOME

case "$1" in
start)
echo "Starting VNC Server"
#Insert your favoured settings for a VNC session
/usr/bin/vncserver :0 -geometry 1280x800 -depth 16 -pixelformat rgb565
;;

stop)
echo "Stopping VNC Server"
/usr/bin/vncserver -kill :0
;;

*)
echo "Usage: /etc/init.d/vncserver {start|stop}"
exit 1
;;
esac

exit 0

Kali Linux uses the newer systemd startup process, though you can still use init scripts with Kali. Once you have the script created (press i to insert, paste the code with your terminal’s paste function, which in PuTTY is a right-click, then hit ESC followed by :wq to save the file and exit), we need to make sure that Kali uses it when the system boots. In order to do that, run the following:

ec2-user@kali:~$ sudo chmod 755 /etc/init.d/vncserver
ec2-user@kali:~$ sudo update-rc.d vncserver defaults
ec2-user@kali:~$ sudo /etc/init.d/vncserver start

Your Kali instance will add the service as a startup script in the default run levels, which is all we need to do. When you start the VNC server for the first time, you will be asked to set a password. This is the password you will enter when you connect to the VNC server, so it’s a minimal amount of security to keep unauthorized users out. The last thing to do is allow the VNC traffic through the security group, which is essentially a firewall where you create rules for traffic control. We need to allow TCP port 5900 in. From the left-hand side of the AWS portal, go to Security Groups. You should see one whose Group Name includes Kali Linux. Right-click on that and select Edit Inbound Rules. Once you are there, you can add the rule just the way it’s shown below.

[Screenshot: Security Group inbound rules with a Custom TCP rule allowing port 5900]

If you happen to know the public IP address you are using through your ISP, you can enter that into the Source field to restrict who can connect, but don’t make the rule any tighter than you can maintain or you’ll just end up locking yourself out. If your IP address changes, you will need to change it here to allow yourself VNC access. Once you have saved the rule, it becomes active. There is nothing further to do.
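
If you would rather script it than click through the portal, the AWS CLI can add the same rule; a sketch, with a placeholder group ID and a placeholder source address (use 0.0.0.0/0 to allow any source):

kilroy@binkley ~ $ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5900 --cidr 203.0.113.25/32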

All you need to do now is start a VNC client and connect to your server. There are a number of clients, including Screen Sharing on a macOS system. On Windows, the RealVNC client is a reasonably good application for connecting to VNC servers. When you configure the connection, you will be asked for the password you created when you first started the VNC server. You will also need the public IP address. When you go to the AWS portal and select your running Kali instance, at the bottom you will see two lines. One is the Public DNS (IPv4) and the other is the IPv4 Public IP. You can use either of those, though both will likely change when you shut down and start up your Kali instance. Use the hostname (DNS name) or the IP address, along with the password you created, to connect to your VNC server. You will be presented with a desktop running XFCE, so it doesn’t look like the same desktop you would get running Kali locally in a VM. However, it is still a fully functional instance of Kali with the desktop and access to all the applications. 
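
On macOS, you can even launch Screen Sharing straight from the terminal by handing open a vnc:// URL built from your instance’s public DNS name:

kilroy@binkley ~ $ open vnc://ec2-34-213-11-105.us-west-2.compute.amazonaws.com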


Thursday, September 27, 2018

Kali on AWS

Kali Linux is an incredibly useful distribution for security testing and also open source intelligence gathering. While you can certainly install Kali on a hardware-based system or even in a virtual machine, you can also take advantage of the work other companies have already done. This includes Amazon Web Services (AWS). You don’t have to build an image or install a hypervisor. You just connect to AWS and launch an EC2 instance from the AWS marketplace. We’re going to work through that here, showing you how simple the process is.

This assumes you have an AWS account, which is very easy to set up if you already have an Amazon account, and who doesn’t have one of those? I assume everyone else is spending entirely too much money buying stuff that just shows up at the door, because it takes no thought and almost no effort. I’m not going to walk through the process of creating an account. It should be straightforward enough.

Once you have logged into the AWS portal, go to the Instances page from the link on the left-hand side. From there, you will see a big blue button that says Launch Instance. This will take you to Step 1, where you select an AMI (Amazon Machine Image). If you search for Kali, you will find there are several community images as well as one marketplace image. Use the marketplace image, as you can see below.


Once you have selected Kali Linux as your AMI, you will need to select the size of your system. You can definitely select as large a machine as you want, but if you want to go cheap and don’t plan on doing a lot of high-intensity computing, you can use the free tier system, as shown below. This is a t2.micro type with a single CPU and only 1 GB of memory. You aren’t going to be doing a lot with a system this small, but for just playing around with Kali, it should be ample.


This will create a new instance of the Kali Linux image, after which you will need to create authentication credentials. Under Linux, this is done with SSH keys. If you happen to have keys already stored in AWS, you can use them. Otherwise, you can create a new set, just as you can see being done below. Once you have provided a name, you will need to download the key file. This will be a Privacy Enhanced Mail (.pem) file containing the private key needed to authenticate you and establish an encrypted SSH session.
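
As an aside, if you prefer the command line to the portal, the AWS CLI can create the key pair and launch the instance as well; a sketch, with a placeholder AMI ID standing in for whatever the marketplace listing gives you:

kilroy@binkley ~ $ aws ec2 create-key-pair --key-name Kali --query 'KeyMaterial' --output text > Kali.pem
kilroy@binkley ~ $ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name Kali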


We’re almost done at this point. Your instance will start up after you have downloaded your .pem file and then clicked Launch Instances. You can’t Launch until you have downloaded the key pair, so the Launch button will remain disabled until then. As soon as you launch your instance, it will get provisioned. It takes a couple of minutes or so to start up the instance. Once that happens, it will show up as Running in your instance list. If you right-click, you can select Connect and you will get a window like the one shown below.


In my case, I’m working from a macOS system, so I have an ssh client available through the command line (I use iTerm for command line access). Below, you can see the permissions being changed on the key file, since ssh won’t make use of a key file unless access to it has been restricted. After that, I just ssh into the remote system. Because I’ve let Amazon do all the work for me, I don’t have to make any modifications to security policies in AWS. It took care of allowing SSH to the public-facing IP address it allocated for me.


kilroy@binkley  ~/Downloads  chmod 400 Kali.pem

kilroy@binkley  ~/Downloads  ssh -i "Kali.pem” ec2-user@ec2-34-213-11-105.us-west-2.compute.amazonaws.com

The authenticity of host 'ec2-34-213-11-105.us-west-2.compute.amazonaws.com (34.213.11.105)' can't be established.

ECDSA key fingerprint is SHA256:Rv7rErLsH6pch8jxJc6HL+VmzTxZ3TQw7iwm1mJaLok.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'ec2-34-213-11-105.us-west-2.compute.amazonaws.com,34.213.11.105' (ECDSA) to the list of known hosts.

Linux kali 4.17.0-kali1-amd64 #1 SMP Debian 4.17.8-1kali1 (2018-07-24) x86_64

The programs included with the Kali GNU/Linux system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

Kali GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent

permitted by applicable law.

ec2-user@kali:~$


And that’s all that it takes to get a Kali instance running in AWS! Enjoy!


Tuesday, October 17, 2017

Password Policies

While this has been in process for a while and the guidance has been out for a while, the guidance NIST published in June related to identity management seemed long overdue. For me, this came to a head a few years ago with a client I was working with. They had the then-recommended (best practice to the rescue again) password policies in place. Strong passwords: letters, numbers, mixed case, symbols, appropriate length. Passwords rotated every 30 days. No reusing a password until 12 other passwords had been used. At least 7 days between password changes. Strong password policy, right? What I told them at the time was that they were begging their users to write their passwords down just to keep track of the current one. That, of course, entirely defeated the purpose of the password policy to begin with.

What was even worse was that the administrators and management of the company I was working with had no idea what the purpose of the password policy was to begin with. What exactly is the purpose of rotating passwords and making sure they are incredibly complex? For a start, you make them complex so they can’t be guessed. Unfortunately, with so much horsepower readily available and with rainbow tables so easy to get hold of, even complex passwords of 8 characters (a common minimum) may be easily cracked by a determined attacker. That’s why you have complex passwords: to make sure they can’t be guessed or recovered in a brute-force attack. Since it’s possible to crack them anyway, that idea is a bit behind the times.
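
To put a rough number on it: assume the 95 printable ASCII characters and an attacker making ten billion guesses per second against a fast hash, a rate well within reach of a GPU cracking rig. A little shell arithmetic shows the entire 8-character keyspace falls in about a week:

kilroy@badmilo ~ $ echo $((95**8)) combinations
6634204312890625 combinations
kilroy@badmilo ~ $ echo $((95**8 / 10**10 / 86400)) days to exhaust
7 days to exhaust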

The reason for rotating them is based on an assumption that someone is getting in using the password. If you rotate the password on a regular basis, you limit the amount of time an attacker can stay in your system. The assumption also was that, with a regular rotation scheme and lower-horsepower systems, it could take as long as the rotation period to crack the password. Ultimately, it was about limiting potential access using the password. The reality is that compromise is far more likely to take place through social engineering or other ways of gaining access that don’t need the password at all.

One of the reasons for writing this up was reading Bruce Schneier’s Crypto-Gram e-mail from this month. He quite rightly points out that the idea of password policy stemmed from attempts to try to fix the users rather than trying to actually resolve the problems that existed. As a result, we as information security professionals have spent a lot of time trying to enforce and detect lapses in security policy compliance.

This is yet another example, to me, of the solution coming before the problem. Without really understanding where the threats were (what the problem was), there was a drive to implement password policies. Worse than that, when businesses implemented strong password policies, they felt they were protected against attack. The reality is that they were left exposed because they had no idea what problem they were trying to solve and they spent time implementing and enforcing password policies rather than truly understanding where their threats and exposures were.

This is a soapbox I get on a lot. It is absolutely essential to spend time defining and understanding the problem to be solved, in order to make sure that when a solution is arrived at, it is the right solution and not a red herring that makes people feel like they’ve done something.

Tuesday, October 3, 2017

Chasing Data Using Sleuth Kit

Working on a new set of videos for O'Reilly Media -- basically updating one of the first video titles from back when it was Infinite Skills. In the process, I had to refresh my memory on a number of things. One of them was using The Sleuth Kit tools to run through a disk image to locate the contents of a file. Sure, you could just pop it open in a browser using some commercial tool but where's the fun in that? Autopsy you say? Yeah, but ultimately, Autopsy uses The Sleuth Kit tools to begin with even if you don't see it. Why not just learn what it is that Autopsy does so you can be ahead of the game? Having said that, let's step through how you might go about this.

We're going to be working with an image taken from a Linux disk, where the partition was formatted with ext4. The same steps will work for a Windows disk image with an NTFS partition. Since we have a disk image and not a partition image, the first thing we need to do is determine where the partition actually starts. In order to do that, we are going to use the program mmls, which lists all of the partitions in a disk or disk image. We could also use fdisk -l to do essentially the same thing.
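
A sketch of the invocation, with a hypothetical image name; mmls prints each partition along with its starting and ending sectors:

kilroy@badmilo ~ $ mmls evidence.img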


What we discover here is that the partition we are looking for starts at sector 2048. The other Sleuth Kit tools we will be using need to be told that offset, because they are really looking for the start of the partition in order to parse the data structures that begin there. Once we know where the partition starts, we can get a list of the files that are in the partition. For this, I'm just going to get a list of active files, without doing a recursive listing down through all the directories (which would mean adding -r) and without including deleted files (which would mean adding -d). For our purposes, it doesn't much matter whether we have those or not. We are going to use fls, and we need to add -o 2048 to indicate that the partition starts 2048 sectors into the image.
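
A sketch of that, using the same hypothetical image name; the offset comes before the image:

kilroy@badmilo ~ $ fls -o 2048 evidence.img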


We now have a listing of the small number of files that are in the root directory of this partition. What we get from this listing is whether the entry is a directory (d/d) or a regular file (r/r). The second column is the inode where the metadata for the file is located. The metadata not only includes date information but also, more importantly, the data blocks that belong to the file. Those data blocks are where we can get access to the contents of the file. In order to get the data blocks, we are going to use the program istat, which gives us all of the information the inode holds about the file. Keep in mind that while you think about the file in the context of the filename, on a UFS-based system (ext inherits a lot from UFS, the UNIX File System that goes back to the 70s and 80s with BSD, the Berkeley Software Distribution), a file is just a collection of related data blocks. We could have multiple filenames that all point to the same "file" on the disk.

Running istat, we provide the offset to the start of the partition, just as we did with fls. Additionally, we provide the image that we are searching and also the inode that we want to interrogate. You can see the results of this below.
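
A sketch, with a hypothetical inode number of 12 taken from an fls listing:

kilroy@badmilo ~ $ istat -o 2048 evidence.img 12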


Among other things, we can see that the inode has been allocated; it's not free space, because it refers to a file. You can see the date and time information, along with the permissions that are associated with the file. Additionally, as I mentioned above, different filenames can point to the same set of data blocks (the same inode). The "num of links" entry indicates the number of filenames that point to this inode and, by extension, the data the inode points to. This is where the "Direct Blocks" entry is important. The direct blocks tell us where to get the contents of the file. For this, we use the blkcat command.


Again, we have to provide the offset, because blkcat expects to work from the beginning of the partition, just as fls and istat do. We provide the image name, then the block number where the data is located, followed in this case by the number of blocks we want to extract. By default, blkcat pulls a single block, but since all of the blocks for this file are consecutive, we can pull them all at once. Beneath that, you can see the contents of the file.
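
A sketch, with a hypothetical starting block of 8257 for a file occupying two consecutive blocks:

kilroy@badmilo ~ $ blkcat -o 2048 evidence.img 8257 2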

While it takes several steps, using mmls to get the partition start, fls to get a listing of files, istat to get the data block addresses and finally blkcat to extract the file contents, it does help to highlight how the filesystem is put together. Being able to follow this chain, no matter the file or the filesystem, builds an understanding of how filesystems work, so that whatever tool you end up using, you know the process it is following.

Thursday, January 26, 2017

Password Management

Recently, there was a piece on password managers on The Today Show on NBC. The tech guy was blazing through a number of apps for phones, since he had such a short period of time to cover what is apparently a lot of ground. Normally, I would have ignored such a presentation. It is generally just so much fluff, after all, relegated to the third or even fourth half hour of a morning newstainment program. Anything even remotely non-fluffy happens in at least the first hour, and if it’s actually grounded in reality and based on actual, topical events, it’s in the first half hour. Here we have a short piece that’s essentially lifestyle in nature. No big deal, right? However, there was one big red flag, just plain inaccurate, that needed to be addressed.

The presenter, who shall remain nameless so I don’t besmirch his knowledge or character here, told Matt Lauer that password managers are great so you have all of your passwords (because we all use a different password for every login and Web page we use, right?) in one place. This means you don’t forget them. All you need to do is be able to get into the password manager. Here’s the rub, though. Because you have very helpfully collected them all in one place, you have made it considerably easier for an attacker. All the attacker needs to do is get into your password manager.

Not so fast, you say, echoing the aforementioned presenter. You have been informed that the very strongest of encryption is in use within this password manager, making it impregnable. This is the common delusion and misunderstanding when it comes to encryption. Encryption is only helpful if someone comes across a file or a disk by itself that has been encrypted. If you run across a stray disk that has been encrypted using something like the Advanced Encryption Standard (AES) with a very large key, say 256 bits, you are going to have a very hard time getting into the drive, unless the key has somehow been attached to the drive. And this is where we have a problem with devices and files that have been encrypted.

In essence, the key is stored with the encrypted data. All someone needs to do is gain access to the password manager using your credentials and the data is unlocked, just as it would be for you, because the app has no idea it’s not you. Password managers that use a single password, regardless of how strong it is, are vulnerable to attack because all someone needs to do is get that one password and they have your entire cache of passwords. That’s it. It doesn’t matter whether the underlying file is encrypted, or even whether each individual password is encrypted. The passwords have to be presented to you in the clear if they are to be of any value, so if you can authenticate to the password manager, so can the attacker.

Aha, you say! You use your fingerprint. Biometrics to the rescue. The problem with that particular theory is that while your fingerprint may be yours and yours alone, your fingerprint can be acquired. And used against you. Fake fingerprints can be used to fool fingerprint scanners on mobile devices and frankly most any device looking for your fingerprint. You use your fingerprint to get into your password manager but you leave your fingerprints all over the place. It’s not that challenging to acquire your fingerprint and if an attacker can get your phone — either because you left it on your desk while you stepped out of your office for a moment or because they simply stole it from your pocket or purse — they can get access to your passwords from your password manager.

This is not to say that you shouldn’t use a password manager. A determined attacker is probably going to find a way to get your passwords; if not through you, then through someone else, by gaining access to a system by way of that someone else. However, if someone gains clear-text access to your passwords, it won’t matter a bit how strong they are. You can use a 32-character passphrase with upper and lower case, numbers and symbols; if it’s stored in your password manager and an attacker gets access to your password manager, the strength of the password doesn’t matter.

If your password manager stores your passwords on an Internet-based storage medium (sometimes called “in the cloud,” though the term is misleading to say the least), there is now a second way an attacker can get access to your data. This is especially true if there is a Web portal for you to look at your passwords or pull them down to use in Web forms through your browser. Now your fingerprint is no longer in play. It’s just down to that username and password combination.

Ideally, sites you visit regularly that store data you actually care about (aside from the throwaway e-mail address you use to log into sites you don’t much care about, for instance) would support two-factor authentication. This means a username and password (something you know) as well as either a soft token (Google Authenticator, Facebook Code Generator) or a text message to your cell phone (something you have). These two factors together can help protect your login access by requiring the attacker to both know your password and either have your phone or be able to intercept data like a text message.

Being aware of the potential challenges of various applications can help you make informed decisions. If you don’t understand what you are signing up for, you are not engaged in informed consent and you certainly are not engaged in managing the risk.