Android Process Memory Dumps with memfetch – Android 4.4.2 (on Ubuntu 16.04)

December 25, 2016

I used two different C programs to achieve the same goal of dumping a process' memory. They are Memfetch (by Michal Zalewski, found on his blog) and Memdump (by Tal Aloni, found on StackExchange).

Update [2017-01-16]: I’m not sure whether this will work for Android on both x86 and ARM architectures. I tested it on an ARM architecture (physical device), and it worked. I’m yet to test it on an x86 architecture. Will update after testing.

Memfetch:

Find the code from the author’s webpage here – http://lcamtuf.coredump.cx/soft/memfetch.tgz

Unzip/Extract the code from the TarGZ archive

 tar -xvf memfetch.tgz 

Get into the directory:

cd memfetch

Use the ls command to list the files. The files should be listed as below:

COPYING   Makefile   memfetch.c   mffind.pl   README

Now install the GCC cross-compiler that targets Android on ARM:

sudo apt-get install gcc-arm-linux-androideabi

(some instructions say to use gcc-arm-linux-gnueabi, but this didn't work for me)

Edit the Makefile

Normally at this point you should be able to run the make command and compilation should just work; however, on Ubuntu the Canonical developers seem to have moved some key .h header files around, which causes problems. The first file that will probably cause trouble when you run make is page.h, because Ubuntu has moved these headers from their original location of /usr/include/asm into the kernel source files at /usr/src/linux-headers-[your-specific-kernel]/include/asm-generic

Make sure you’ve installed build-essential for this path to be existent

 sudo apt-get install build-essential 

You can get to the correct path with:

cd /usr/src/linux-headers-$(uname -r)/include/asm-generic

Once you locate the asm-generic folder check that the page.h file is present.

Now the best way to solve this problem is to create a symbolic link (symlink) in /usr/include/ called asm that links to /usr/src/linux-headers-[your-specific-kernel]/include/asm-generic/ . This is done with the following command:

sudo ln -s  /usr/src/linux-headers-$(uname -r)/include/asm-generic  /usr/include/asm 

Even with this, there will still be some problems, because some of the .h files in asm-generic look for headers under asm-generic in /usr/include/, where those header files don't actually exist. So an extra include (-I) directive may need to be added to the Makefile.

The beginning of your make file should look like this:

FILE = memfetch
CFLAGS = -Wall -O9 -static
CC = arm-linux-androideabi-gcc

NB: it's a capital 'O', not a zero, and it's a 9 (nine), not a 'g'. -O9 is just a GCC optimization level; GCC treats anything above -O3 as -O3, so it isn't strictly necessary.
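If you do end up needing that extra -I mentioned above, one way to tack it onto the CFLAGS line without editing the Makefile by hand is something like this (a sketch only; it assumes the headers live under the kernel-headers tree used earlier):

sed -i "s|^CFLAGS.*|& -I/usr/src/linux-headers-$(uname -r)/include|" Makefile
grep CFLAGS Makefile    # confirm the include path was appended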

Run make at this point. If it works, great: you should get a memfetch executable in your memfetch directory. If not, follow on.

If you run make and still get errors about missing .h files, what I did was simply copy the missing files from the kernel headers under /usr/src/linux-headers-$(uname -r)/include/ to the corresponding folders under /usr/include/

e.g:

 sudo cp /usr/src/linux-headers-$(uname -r)/include/asm-generic/memory_model.h  /usr/include/asm-generic/memory_model.h

The following files were missing (a sketch for copying them over in one go follows the list):

  • getorder.h
  • /linux/compiler.h
  • /linux/log2.h
  • /linux/bitops.h
  • /linux/irqflags.h
  • /linux/typecheck.h
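Rather than copying them one by one, here is a sketch for grabbing the whole lot at once (adjust the list to whatever make actually complains about on your system):

SRC=/usr/src/linux-headers-$(uname -r)/include
# asm-generic headers go under /usr/include/asm-generic
sudo cp $SRC/asm-generic/getorder.h /usr/include/asm-generic/
# linux/ headers go under /usr/include/linux
sudo cp $SRC/linux/compiler.h $SRC/linux/log2.h $SRC/linux/bitops.h \
        $SRC/linux/irqflags.h $SRC/linux/typecheck.h /usr/include/linux/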

At this point I got more missing files under bitops/, so I decided to copy the entire directory:

cd  /usr/include/asm-generic
sudo mkdir bitops
sudo cp /usr/src/linux-headers-$(uname -r)/include/asm-generic/bitops/* /usr/include/asm-generic/bitops/

At this point I finally ran the make command in the memfetch directory and an executable was created. There were a couple of warnings, but no errors, and the executable worked when I pushed it onto the Android device.

Pushing to the Android Device and Executing “Memfetch”:

NB: We are assuming that the device is properly rooted, and the setting for giving adb shell root permissions has been set in your “Super User” management app.

Go to the adb executable's location, which might be /home/Android/Sdk/platform-tools; it could also be elsewhere, depending on where you installed the SDK.

cd /home/Android/Sdk/platform-tools

The best location to push the executable is /data/local/tmp. Let’s create a directory in this location and use the adb push command to push the executable here

./adb shell
su root
cd /data/local/tmp
mkdir mem_dump_tools
exit
exit

We exit all the way out so that we can run the ./adb push command from the host:

./adb push ~/Desktop/memfetch/memfetch /data/local/tmp/mem_dump_tools/

Verify that the memfetch executable has been pushed to the right location:

./adb shell
su root
cd /data/local/tmp/mem_dump_tools
ls -al

The memfetch executable should now be in place; however, it cannot be executed yet because it does not have execute permissions. We can give it execute permissions with the following command (assuming we are still the root user):

chmod 755 memfetch

(As a side note: chmod u+x memfetch should also work.)

Verify that the Execute permissions have been applied

ls -al

You should see rwx against the name of the memfetch executable. (The x being the important thing)

Now, if we run a particular app and look up its process ID, we can dump that process' memory. Pick an app, e.g. Google Chrome, fire it up and browse to some page.

On the adb shell:

ps | grep chrome

You should get one to three Chrome processes (the extra ones have sandboxed or privileged attached to the process name). Pick the process ID of the one that is plain com.android.chrome.

Now we can run memfetch, giving it the target process ID:

./memfetch [pid]

e.g. ./memfetch 2314 if the process ID is 2314
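If you don't feel like eyeballing the PID, you can also grab it from the host with something like this (a sketch; it assumes the stock Android ps output where the PID is the second column, and filters out the :sandboxed/:privileged processes):

./adb shell ps | grep com.android.chrome | grep -v ':' | awk '{print $2}'

Then pass that number to ./memfetch inside the adb shell as above.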

You should now get some output showing the memory-mapped regions being copied. memfetch reads the process' mappings from /proc/<pid>/maps and dumps each mapped address range (block) to its own "region dump" file with a .bin extension, and it also writes a single .lst file listing all the regions that were dumped. So the end result is a lot of .bin files plus one .lst file.

NB: If at this point, when you try to run memfetch, all you get is a listing of the available options/directives and nothing else, then you need to comment out a section of the code in memfetch.c and recompile. I don't know why this is the case, but someone on StackExchange [2] figured this out and it also worked for me.

The lines to comment out are:


while ((opt=getopt(argc,(void*)argv, "+samwS:h"))!=EOF)
    switch(opt) {
       case 's': waitsig=1; break;
       case 'a': skipmap=1; break;
       case 'w': textout=1; break;
       case 'm': avoid_mmap=1; break;
       case 'S': if (sscanf(optarg,"%x",&onlyseg)!=1)
                   fatal("Incorrect -S syntax (hex address expected).\n");
                 break;
       default: usage(argv[0]);
    }

With that, everything should work.
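Once the dump has completed, the resulting .bin and .lst files can be pulled back to the host for analysis, for example like this (a sketch; the destination folder is just an example, and older adb versions may need the files pulled individually rather than as a directory):

./adb pull /data/local/tmp/mem_dump_tools/ ~/Desktop/memfetch_dumps/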

This blog post has become too long, so I'll do memdump in the next one …

Sources:

[1]. http://lcamtuf.coredump.cx/memfetch.tgz

[2]. http://stackoverflow.com/questions/18372120/memfetch-with-android-samsung-galaxy-nexus

Android Process Memory Dumps – Notes

December 24, 2016

Disclaimer: I don’t really understand everything about the inner workings of RAM and the OS. These are just my notes on how I got RAM process memory dumps of Android apps.

Intro:

Capturing the process memory of a specific running process (application) on Android turned out to be more difficult than I thought. That’s probably because of the way Android is built: processes run under their own individual users with their respective permissions.

Reading directly from /proc/<pid>/mem also seems to be hindered, since a process cannot simply read another process’ memory on Android (I think in some other Unix/Linux systems at least reading is possible, typically after ptrace-attaching to the target process).

A lot of sources talk about capturing “heap” dumps, but I wanted the entire process memory, including the stack, the instructions and essentially everything else. Heap dumps can be acquired through the DDMS tool in Android Studio (and similarly in Eclipse). The basic idea is that Android Studio provides RAM profiling tools for analyzing an app’s runtime behaviour.

You can take a heap dump from DDMS. According to most sources, it needs to be converted from the default Android HPROF format into something that can be analyzed by the Java MAT (Memory Analyzer) tool (I’m not sure, but I think DDMS now does this conversion for you automatically).
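For reference, if you ever do need to convert a DDMS heap dump by hand, the SDK ships a small converter called hprof-conv in platform-tools (a sketch; the file names and SDK path are just examples):

# convert an Android-format heap dump into standard HPROF that MAT can open
~/Android/Sdk/platform-tools/hprof-conv app-heap.hprof app-heap-mat.hprof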

What I wanted was a full memory dump of the process and I couldn’t seem to find a way except through using the memfetch tool (by Michal Zalewski) compiled for Android or some smartly written script called memdump (by Tal Aloni) found on StackExchange.

Both scripts are written in C, so I had to compile them for Android and get them running on a phone in order to achieve my goal … and how this was done is the subject of the next post.

Major Sources:

[1]. Sylve, J., Case, A., Marziale, L., Richard, G.G.: Acquisition and analysis of volatile memory from Android devices. Digital Investigation 8, 175–184 (2012).

[2]. http://security.stackexchange.com/questions/62300/memory-dumping-android

[3]. http://lcamtuf.coredump.cx (look for the memfetch code here)

Android FileSystem – Notes

December 19, 2016

App Locations:

  • /system/app – Pre-installed apps (including the bloatware)
  • /system/priv-app – Privileged apps (the /system partition is mounted read-only to prevent changes)
  • /data/app – Normal (user-installed) apps in internal memory
  • /mnt/sdcard/.android_secure – Apps stored on external memory go into an encrypted container
    • /mnt/asec – These apps need to be decrypted to run, so during runtime they are found as a decrypted copy on a tmpfs here
    • The .android_secure container cannot be opened directly from the Android device; however, if you plug the SD card into another computer through a card reader, the packages appear as .asec files that correspond to the same apps mounted on /mnt/asec

App Data:

  • /data/data/<package_name> – Default location for application data on internal storage.
  • /mnt/sdcard/Android/data/<package_name> – Default location for application data on external storage (if the developer sticks to the rules outlined in the Android developer documentation); see the sketch below for checking these paths on a device
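To check where a particular app's APK and private data actually ended up on a device, something like this works over adb (a sketch; com.android.chrome is just an example package, and the private data directory needs root):

./adb shell pm path com.android.chrome     # prints package:<path-to-apk>, e.g. under /data/app
./adb shell
su root
ls -al /data/data/com.android.chrome       # the app's private data directory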

Binary Executable Test Locations:

  • /data/local/tmp – Location where you can put executables (NDK compiled / Linux ARM built)

Accessing the SDCard on the Emulator:

First make sure you’ve indicated that you want an SD card for your Android Virtual Device in the AVD Manager while creating it.

  • You can find the path of your sd card with cat /proc/mounts and df -h
  • It should be at /mnt/media_rw/<8-Character-Serial-Number> (see the sketch below)
    • e.g. /mnt/media_rw/1CEF-2AB1
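A quick way to confirm the mount point from an adb shell (a sketch; the serial-number directory will differ on your emulator):

./adb shell
su root
cat /proc/mounts                # look for the /mnt/media_rw entry
ls /mnt/media_rw/               # the 8-character serial number shows up here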

Sources:

[1]. http://android.stackexchange.com/questions/3002/where-in-the-file-system-are-applications-installed

DNS Tunneling Dataset (Notes)

September 9, 2016

A: Tunneled Traffic over DNS:

Total Samples size:

  •  HTTP:
    • Static: 50 Samples
      • Websites that seem to maintain the same appearance (images, text) over a few hours and more
    • Dynamic: 50 Samples
      • Websites whose visual contents (images, text, ads) seem to change within the hour, or even randomly
  • FTP:
    • FTP Downloads: 50 Samples
    • FTP Uploads: 50 Samples
  • HTTPS:
    • Static: 50 Samples
      • Websites that seem to maintain the same appearance (images, text) over a few hours and more
    • Dynamic: 50 Samples
      • Websites whose visual contents (images, text, ads) seem to change within the hour, or even randomly
  • POP3:
    • Email Downloads: 50 Samples

B: Plain Traffic (Not tunneled over DNS):

(… not yet documented)

 

 

Categories: Uncategorized

Using wget in interesting ways

February 17, 2016

Make a web request for a web page and all the resources it needs in order to display correctly, but delete everything immediately after it is downloaded:

wget -H -p -e robots=off --delete-after http://www.google.com 
  • -H [or] --span-hosts [...also fetch requisites hosted on other domains]
  • -p [or] --page-requisites [...download all the resources (images, CSS, etc.) needed to display the page]
  • -e robots=off [or] --execute [...run the command robots=off, i.e. ignore robots.txt]
  • --delete-after [...delete each file right after it has been downloaded]

Other useful options

  • --no-dns-cache [...Turn off caching of DNS lookups]
  • --no-cache [...to disable server-side cache so as to always get the latest page]

If you want to store the pages then the -E and -K directives may be of use

wget -E -H -K -p -e robots=off http://www.google.com 
  • -E [or] --adjust-extension [...save HTML/CSS documents with the proper .html/.css extensions]
  • -K [or] --backup-converted [...keep the original file with a .orig suffix when converting links]

If the web-server that you are fetching pages from blocks automated web-requests based on the user-agent, you can fool it with the following directive:

  • -U [or] --user-agent="agent-string" [...send the given User-Agent string instead of the default Wget/version]
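For example, a request that pretends to come from a desktop browser might look like this (a sketch; the user-agent string is just an illustration):

wget -p -H -e robots=off -U "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.109 Safari/537.36" http://www.google.com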

If you don’t want to use the --user-agent option, you can create a .wgetrc file in your home directory so that wget will always use the pre-configured user-agent.

Example ~/.wgetrc

### Sample Wget initialization file .wgetrc by http://www.askapache.com
## Local settings (for a user to set in his $HOME/.wgetrc).  It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
header = Accept-Language: en-us,en;q=0.5
header = Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
header = Connection: keep-alive
user_agent = Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.109 Safari/537.36
referer = /
robots = off


Setting up DNS Tunneling with iodine on Ubuntu 14.04, 15.10

January 18, 2016

So I thought setting up DNS Tunneling was as easy as getting the server software running, then getting the client software running and once the tunnel is set up we’re good to go.

Easier said than done. The tunnel is set up but the routing also needs to be set up such that traffic goes through the tunnel. Ahaaa … yes that’s where things got tricky.

Summarily, the steps that need to be done are:

Phase I: Getting the DNS server and domain configurations done

Here you’ll need a domain name and access to the authoritative name server. Alternatively you can use a free domain name service that can provide you with the capability of configuring the A resource record (assigning a domain name to an IP) and the NS resource record (assigning the name or IP of the authoritative name server).

(Afraid.org is a nice free service that allows you to make use of free public subdomains and modify the necessary DNS records for this experiment)

Phase II: Getting the Tunneling ready

  1. Set up your network with an internal client machine, a firewall that locks down the internal machine and a machine that has a (static) public address
  2. Download the tunneling software (in my case iodine)
  3. Install (make, make install, make test, etc.)
  4. Run the server
  5. Run the client

Phase III: Getting the routing of traffic through the tunnel and the forwarding at the server side

  1. Set up forwarding of traffic received on the tunnel to go out the server’s physical interface
  2. Create a route in the routing table of the client machine that tells the machine to go out through the normal interface to the normal gateway when looking for the usual/normal DNS server (So only DNS traffic to this server has a “default route”)
  3. Remove the original default gateway and replace it with the IP of the DNS tunneling server inside the tunnel

Now let’s get straight into the mix of how to configure this …

Step-by-Step Configuration:

The network set up:

  • A Client machine within a private/internal LAN: 192.168.XX.XX
  • A pfSense firewall with 2 interfaces (one to the internal LAN, that is the gateway 192.168.XX.1) and the other on a public network with a public IP.
  • A server with a public address ZZZ.ZZZ.ZZZ.ZZZ (let’s make it 123.123.123.123 for illustrative purposes)

(Diagram: client on the internal LAN → pfSense firewall → public network → iodine server at 123.123.123.123.)

Phase I: Getting the DNS server and domain configurations done

Sign up at afraid.org. Either create a new domain or make use of a publicly usable shared sub-domain. Set the A record to be the IP address of your iodine server e.g. 123.123.123.123, and set the name to be something that you’ll remember e.g. test.ignorelist.com. Set the NS record name (testns.ignorelist.com) to point to the domain name (test.ignorelist.com)

That is:

test.ignorelist.com - A - 123.123.123.123
testns.ignorelist.com - NS - test.ignorelist.com

In effect both the (authoritative) name server and the domain itself are at the same IP, though the domain name actually doesn’t seem to be used visibly (to my knowledge).
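Once the records are in place, you can sanity-check them from any machine before moving on (a sketch using the example names above):

dig +short A test.ignorelist.com      # should return 123.123.123.123
dig +trace testns.ignorelist.com      # the final referral should show the NS pointing at test.ignorelist.com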

Phase II: Getting the Tunneling Ready:

Get the latest version of the iodine repository to your server machine. Assuming you’re on Ubuntu for both the client and the server, you’ll need to install git first:

sudo apt-get install git

Clone the latest version of iodine from its GitHub repository into whichever directory you please. I chose to clone it to the desktop:

cd Desktop
git clone https://github.com/yarrick/iodine.git

Build, install and test the source:

NB: While doing the make, make install and make test, it may complain that it’s missing some C header files (e.g. zlib.h and check.h), so you should install the ‘zlib1g-dev’ and ‘check’ packages/libraries first:

sudo apt-get install zlib1g-dev check

Then:

cd iodine
make
sudo make install

Run the tests:

make test

If so far you’ve set this up only on the server machine, then you need to repeat the process on the client machine so that the iodine package is installed on the client as well.

CHECK POINT: At this point you should have iodine installed on both your client and server machines.

Now we can run them so as to get the tunnel set up. (To be honest, I haven’t read deeply into the man pages for the details of the parameters/options available, so I’ll just note down what worked for me.)

Run the server:

 sudo iodined -c -D 10.0.0.1 testns.ignorelist.com 

It may ask you for a password, for which I used Password01. Alternatively you could use the -P directive and pass it the password directly in the command:

 sudo iodined -c -P Password01 -D 10.0.0.1 testns.ignorelist.com 

Run the client:

sudo iodine -P Password01 testns.ignorelist.com 

At this point the tunnel should connect, with the server having an IP of 10.0.0.1 and the client having an IP of 10.0.0.2. A new tun/tap interface (dns0) should also appear in the ifconfig output on both the client and the server. There should also be some “keep-alive” pings being logged on the server-side terminal.

From the client, ping the server at 10.0.0.1:

ping 10.0.0.1

If the ping succeeds then we’ve made some good progress in setting up the tunnel, … but there’s still more to go.

Phase III: Getting the routing of traffic through the tunnel and the forwarding at the server side

This is done in order to route all traffic from the client machine through the DNS tunnel (dns0) to the server’s physical interface and onward to the requested resource, so that responses received on the server-side physical interface come back through the tunnel server and are forwarded over the DNS tunnel to the client.

NB: These steps are only necessary if you want to get ‘all’ traffic from the client passing through the tunnel. If that’s not your desire, then you can skip Phase III.

Setting up IP forwarding on the SERVER SIDE involves changing the ip_forward flag and setting some iptables (ip_tables) rules

sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

Check that the flag has been set to 1

cat /proc/sys/net/ipv4/ip_forward 

(many times this ip_forward flag does not persist through reboots)
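If you want the flag to survive reboots, the usual way is to set it in /etc/sysctl.conf (a sketch; the sed just sets or uncomments the existing net.ipv4.ip_forward line):

sudo sed -i 's/^#\?net.ipv4.ip_forward=.*/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sudo sysctl -p     # reload; it should print net.ipv4.ip_forward = 1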

Set up the NAT rules in iptables to NAT traffic from the dns0 interface to the eth0 interface and vice versa:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o dns0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i dns0 -o eth0 -j ACCEPT

If you are unlucky like I was, something will be wrong with iptables and it cannot be found despite the package being installed. In that case,

 sudo modprobe iptables 

or

 sudo modprobe ip_tables 

should sort out the problem. If they don’t, then you might have to check that the kernel object exists; if it doesn’t, then updating the kernel package as shown below and running modprobe again might help.

/lib/modules/$(uname -r)/kernel/net/ipv4/netfilter/iptables.ko

or

/lib/modules/$(uname -r)/kernel/net/ipv4/netfilter/ip_tables.ko

Re-installing/updating the kernel image:

sudo apt-get install --reinstall linux-image-$(uname -r)

Run modprobe iptables again as seen above

Now for the tricky part (CLIENT SIDE):

Fix the routing table on the CLIENT machine such that DNS traffic goes legitimately to the local DNS server that usually does domain look-ups on your client’s behalf, out through the original default gateway

sudo route add -host [IP-Address of local DNS server] gw [original default gateway] 

e.g:

sudo route add -host 123.123.123.10 gw 192.168.1.1 

i.e. adding a route for a host (in this case, the DNS host machine), rather than a network via the normal gateway

The next 2 steps need to be done quickly (if there is a sizeable delay, for some reason the dns0 interface disappears and you won’t be able to add it as the actual default gateway)

Remove the original default gateway:

sudo route del default

Replace the default gateway with the tunnel device interface (dns0)

sudo route add default dev dns0 gw 10.0.0.1 

Read as “add default gateway as 10.0.0.1 through device dns0”
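Since the delay between those two commands matters, it can help to chain them so they run back to back (a sketch):

sudo sh -c 'route del default && route add default dev dns0 gw 10.0.0.1'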

At this point you should be able to ping 10.0.0.1 from the client and get a successful response indicating that the tunnel is active and connected.

To test that HTTP traffic is being tunneled over the DNS tunnel, open up Wireshark on the client machine and start a capture on eth0. If you open any webpage e.g. http://www.google.com you should see a flurry of DNS traffic that also corresponds with the verbose output visible on the server terminal

Some gotchas that might trip you up:

Check that the dns0 interfaces are up on both the client and the server and that they can ping each other. Sometimes the dns0 interface on the client disappears unannounced, and you might have to start the client side of the tunnel again and redo the routing set-up. Restart the server first for good measure.

Check that the ip_forward flag on the server side is still set to 1

Useful commands for dealing with the SSH Known Hosts File

August 11, 2015

Some useful things that you can do with ssh-keygen:
Listing all the entries in the known_hosts file:

ssh-keygen -l -f /home/myUserName/.ssh/known_hosts

Listing a specific entry
(with IP address only)

ssh-keygen -l -f /home/myUserName/.ssh/known_hosts -F 192.168.1.1

(with IP address and port)

ssh-keygen -l -f /home/myUserName/.ssh/known_hosts -F [192.168.1.1]:8888

Listing the hash of a specific entry:

ssh-keygen -H -F 192.168.1.1 -f "/home/myUserName/.ssh/known_hosts" 

Getting the fingerprint of a public-key file (e.g. id_dsa.pub, id_rsa.pub):

ssh-keygen -lf ~/.ssh/id_rsa.pub
ssh-keygen -lf /home/myUserName/.ssh/id_rsa.pub

Removing an entry from the known_hosts file (using the IP address only):

ssh-keygen -f "/home/myUserName/.ssh/known_hosts" -R ip-address

e.g:

ssh-keygen -f "/home/User1/.ssh/known_hosts" -R 192.168.1.1

Removing an entry from the known_hosts file (using the IP address and port) (Commonly seen with services that use non-standard ports):

ssh-keygen -f "/home/myUserName/.ssh/known_hosts" -R [ip-address]:port

e.g:

ssh-keygen -f "/home/User1/.ssh/known_hosts" -R [192.168.1.1]:8888

Also useful:
Sometimes you may have changed the owner of the known_hosts file by mistake thus making removal of entries impossible. So, to change the owner back to the particular user:

chown userName:Group /path/to/file
Categories: InfoSec, Networking

Wireless Security and Goal Line Technology

May 12, 2015

So I was watching a football (soccer) match last night (Arsenal – Swansea) and a goal was scored that no one actually saw (well, except the striker, who believed it was a goal, or rather wanted others to believe it was a goal; I’m not sure whether he even saw it). Anyhow, the ball crossed the goal-line by about 15cm, so fast, but at the same time the goalkeeper reacted swiftly enough and pawed it out of the goal. Quite possibly, no one actually saw it. No one really reacted. Even the commentators had no idea whether it was a goal or not.

The turn-around moment was when the referee whistled for a restart at centre. His watch, linked to the goal-line technology (I think at some point they called it Hawk-Eye), caught the actual moment of the entire ball crossing over the line. Arsenal 0 – Swansea (Technology) 1.

Now, for me it was the first time that I had seen literally no one react to a purported goal claim. Even the referee shrugged his shoulders and pointed to the watch, seemingly saying: “I don’t know, I didn’t see it … but the watch vibrated, so it means the ball crossed the line. The goal is awarded.”

The amount of trust put in that system is incredible. OK, to the credit of the system developers/implementers/testers, it should also be said that there has been talk of the system having been tested and scrutinized thoroughly. I don’t know how much it has been tested, but I’d like to think about it this way: it probably still does have vulnerabilities, just that there has not been enough incentive for someone to exploit the system.

Let’s ask some quick hypothetical questions (even though I actually have no knowledge of how the system works):

  • Is there mutual authentication between the system detecting the ball crossing the line and the referee’s watch?
  • Can someone perform a denial of service (jamming the radio signals) on the watch or the system, such that even if the ball crosses the line, the referee doesn’t get any feedback?
  • Could I randomly send signals on a certain frequency to the referee’s watch during the match, such that they are so random that he thinks the system is malfunctioning?
  • Can the camera-replay system (that they use for confirmation) be adjusted in real-time to move the round figure of the ball slightly further over the line, or slightly less depending on the desired outcome (goal given, or goal disallowed)?

There are probably other questions that can be asked. These are just a quick few.

Notes on the Regin Malware

February 10, 2015

Came across an article downplaying the sophistication of the Regin Malware:

http://www.tripwire.com/state-of-security/incident-detection/why-regin-malware-isnt-the-next-stuxnet/

I thought it was worth making a note about the common techniques that are said to have been used previously. The quotes below are taken directly from the TripWire article by Ken Westin

Many of the “sophisticated” techniques used by Regin have been seen before:

  • Regin’s use of naming a driver something innocuous dates back to some of the first viruses floating around in the DOS days
  • The use of a kernel memory pool tag that is generic is not particularly novel, as it’s the default for most drivers
  • Hiding the MZ executable header marker from an executable memory image is an old technique, as well, dating back to the earliest days of 16-bit DOS executables
  • Hiding encrypted data in the registry or NTFS “extended attribute streams” is something the OS does for legitimate reasons and a technique used by many forms of malware (eg. ZeroAccess)
  • Encrypting data for transport is now standard practice for pretty much all malware

Leveraging forensic artifacts are useful in identifying known malware, but more important is the ability to detect patterns and behavior of malware and quickly searching for indicators of compromise when signatures and artifacts are known.

Android 5.0 – Lollipop – Easter Egg (Flappy Bird, or is it Flappy Android)

November 20, 2014

I just discovered a weird easter egg from the Android Lollipop developer team (now that I’ve googled it, I’m not the first to find it; there are a couple of other sites that report it). Anyhow, there’s an adaptation of the famous/infamous Flappy Bird app hidden among the phone settings.

How can you get to it:

  • Go to the phone settings
  • Go to the bottom-most option, “About Phone”. Tap it
  • Look for the “Android Version: 5.0” menu item. Tap it about 6-7 times rapidly (it has to be rapid, otherwise it won’t go through)
  • A new screen will appear with a round button at the centre. Tap on it and you’ll get a lollipop with the word “lollipop” on it
  • Tap and hold on the “round head” of the lollipop and it will open the Flappy Bird / Flappy Android app. (Tap and hold a couple of times and it should eventually open. If you just tap it, only the colour of the lollipop will change, but the screen will not switch to the app)
  • Once the lollipop disappears, tap the screen once more and the app will start.

Cheers. Happy gaming!

Categories: Android