Published articles Tiny Tiny RSS/1.15 2015-11-12T08:25:19+00:00

Houston, we have a problem!

One of the many undesirable results of the Space Program is the fetishization of the "mission control center", with its rows of workstations facing a common central screen. Ever since, anybody with any sort of mission has built a similar control center.

It's a pain for us in the cybersecurity community because every organization wants a "security operations center" laid out the same way. The point of the room isn't to create something that's efficient for working, but something that will impress visitors. The things done to impress customers can often make an already difficult job even more difficult.

I point this out because of the "glowing globe" picture from President Trump's visit to Saudi Arabia. It's supposed to celebrate the opening of the "Global Center for Combating Extremist Ideology". Zoom the camera out a bit, and you can see it's the mission control center from hell.

Manually counting, I see three sides, each with slightly more than 100 workstations/employees, or more than 300 in total. I don't know if they intend all three sections to focus on the same sets of problems, or if they are split into three different tasks (e.g. broadcast TV vs. Internet content). Their brochure is unclear. I suspect that in the long run it'll be full of third-country nationals from a broad swath of Muslim nations who can speak the local languages and dialects, working in sweat-shop conditions.

In any case, it's clear that the desire for show/spectacle has far outstripped any practical use.

The more I read about this, the more Orwellian it seems. Rather than opposing ISIS's violence, it seems more intent on promoting a Saudi ideology. The whole spectacle seems intent on tricking the Trump administration into supporting something it really should be opposing.

2017-05-23T01:46:00+00:00 (Robert Graham) 2017-05-23T01:46:00+00:00 Errata Security There’s new evidence tying WCry ransomware worm to prolific hacking group

(credit: Health Service Journal)

Researchers have found more digital fingerprints tying this month's WCry ransomware worm to the same prolific hacking group that attacked Sony Pictures in 2014 and the Bangladesh Central Bank last year.

Last week, a researcher at Google identified identical code in a WCry sample from February and an early 2015 version of Contopee, a malicious backdoor used by the hacking team Lazarus Group. The group has been operating since at least 2011. Additional fingerprints linked Lazarus Group to hacks that wiped almost a terabyte's worth of data from Sony Pictures and siphoned a reported $81 million from the Bangladesh Central Bank last year. Researchers say Lazarus Group carries out hacks on behalf of North Korea.
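Shared-code findings like this are often quantified by measuring how many raw byte sequences two samples have in common. A toy sketch (not the researchers' actual tooling) using Jaccard similarity over 8-byte windows:

```python
def ngrams(data: bytes, n: int = 8) -> set:
    """All overlapping n-byte windows in a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 8) -> float:
    """Jaccard similarity of the two samples' n-gram sets (0.0 to 1.0)."""
    na, nb = ngrams(a, n), ngrams(b, n)
    if not na or not nb:
        return 0.0
    return len(na & nb) / len(na | nb)

# Toy samples: the second reuses a chunk of the first, as a reused
# backdoor routine would.
shared = bytes(range(64))
sample_a = shared + b"\x90" * 32
sample_b = b"\xcc" * 32 + shared
print(round(similarity(sample_a, sample_b), 2))
```

Real attribution work layers many more signals on top (compiler artifacts, infrastructure, tradecraft), but the intuition is the same: unrelated binaries share almost no long byte runs.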

On Monday, researchers from security firm Symantec presented additional evidence that further builds the case that WCry, which is also known as WannaCry, is closely linked to Lazarus Group. The evidence includes:


2017-05-23T03:34:04+00:00 Dan Goodin 2017-05-23T03:34:04+00:00 Ars Technica Improved high DPI display support in the pipeline

Support for high DPI monitors has been included in Fedora Workstation for some time now. If you use a monitor with a high enough DPI, Fedora Workstation automatically scales all the elements of the Desktop to a 2:1 ratio, and everything displays crisply and not too small. However, there are a couple of caveats with the current support. The scaling can currently only be either 1:1 or 2:1; there is no way to have fractional ratios. Additionally, the DPI scaling applies to all displays attached to your machine. So if you have a laptop with a high-DPI display and an external monitor with a lower DPI, the scaling can get a little odd. Depending on your setup, one of the displays will render either super-small or super-large.
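In other words, today's behaviour reduces to a threshold on the panel's physical DPI: past some cut-off everything is doubled, otherwise nothing is scaled. A rough sketch of that logic (the 192 DPI cut-off here is an illustrative assumption, not GNOME's exact heuristic):

```python
import math

def pick_scale(width_px: int, height_px: int, diagonal_inches: float) -> int:
    """Choose a 1:1 or 2:1 desktop scale from the panel's physical DPI."""
    dpi = math.hypot(width_px, height_px) / diagonal_inches
    return 2 if dpi >= 192 else 1  # cut-off assumed for illustration

# A 13.3" 3200x1800 laptop panel vs. a 24" 1920x1080 desktop monitor:
print(pick_scale(3200, 1800, 13.3))  # high-DPI laptop: doubled
print(pick_scale(1920, 1080, 24.0))  # ordinary monitor: unscaled
```

The mixed-DPI oddity follows directly: with one global scale, whichever monitor loses the coin toss renders everything at the wrong size.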

A mockup of how running the same scaling ratio on a low DPI and high DPI monitor might look. The monitor on the right is a 24-inch desktop monitor with oversized window decorations.


Both of these limitations have technical reasons, such as how to deal with fractions of pixels when scaling by something other than 2. However, in a recent blogpost, developer Matthias Clasen explains how the technical issues in the underlying system have been addressed. To support mixed-DPI setups, the upstream developers have introduced per-monitor framebuffers, updated the monitor configuration API, and added support for mixed DPIs to the Display Panel. Work is also underway upstream to tackle the fractional scaling issue. For further technical details, be sure to read the post by Matthias. All this awesome work by the upstream developers means that in a Fedora release in the not too distant future, high DPI support will be much, much better.

2017-05-23T05:27:40+00:00 Ryan Lerch 2017-05-23T05:27:40+00:00 Fedora People Your Password is Already In the Wild, You Did not Know?

There was a lot of buzz about the leak of two huge databases of passwords a few days ago, as reported by Troy Hunt on his blog. The two databases are called "Anti-Trust-Combo-List" and "Exploit.In". While the sources of the leaks are not officially known, there are some ways to discover some of them (see my previous article about the "+" feature offered by Google).

A few days after the first leak, a second version of "Exploit.In" was released with even more passwords.
With the huge amount of passwords released in the wild, you can assume that your password is also included. But what do those passwords look like? I used Robin Wood's tool pipal to analyze them.

I decided to analyze the Anti-Trust-Combo-List, but I had to restart several times due to a lack of resources (pipal requires a lot of memory to generate its statistics) and it failed every time. So I fell back to a sample of the passwords and successfully analyzed 91M of them. The results generated by pipal are available below.
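One workaround when a dump is too large for a single pass is to analyze a deterministic sample instead. A minimal sketch that keeps memory flat by streaming the lines and keeping every n-th entry; the filenames in the usage comment are hypothetical:

```python
def sample_every(lines, every: int = 10):
    """Yield every n-th item from an iterable of lines, memory-flat."""
    for i, line in enumerate(lines):
        if i % every == 0:
            yield line.rstrip("\n")

# Usage against a real dump (filenames hypothetical):
# with open("anti-trust-combo-list.txt", errors="ignore") as fh, \
#      open("sample.txt", "w") as out:
#     for pw in sample_every(fh, every=5):
#         out.write(pw + "\n")
```

Because file objects are iterated lazily, only one line is held in memory at a time, regardless of how many hundreds of millions of entries the dump contains.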

What can we deduce? Weak passwords remain the norm. Most passwords have only 8 characters and are built from lowercase characters only. Interesting fact: users like to "increase" the complexity of their password by adding trailing numbers:

  • Just one number (they have to change it regularly and simply increment it at each expiration)
  • By adding their birth year
  • By adding the current year
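Habits like these are easy to tally mechanically. A small sketch that classifies a password's numeric suffix; the year ranges are illustrative assumptions, not thresholds from pipal:

```python
import re

def trailing_digit_habit(password: str) -> str:
    """Classify the numeric suffix of a password, if any."""
    m = re.search(r"(\d+)$", password)
    if not m:
        return "no trailing digits"
    digits = m.group(1)
    if len(digits) == 1:
        return "single digit"
    if len(digits) == 4 and 1940 <= int(digits) <= 2010:
        return "birth year (assumed range)"
    if digits in ("2016", "2017"):
        return "current year"
    return "other digits"

print(trailing_digit_habit("password1"))   # single digit
print(trailing_digit_habit("alex1987"))    # birth year (assumed range)
print(trailing_digit_habit("summer2017"))  # current year
```

Run over a large sample, counts from a classifier like this reproduce the "last number" and "last 4 digits" tables that pipal prints below.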
Basic Results

Total entries = 91178452
Total unique entries = 40958257

Top 20 passwords
123456 = 559283 (0.61%)
123456789 = 203554 (0.22%)
passer2009 = 186798 (0.2%)
abc123 = 100158 (0.11%)
password = 96731 (0.11%)
password1 = 84124 (0.09%)
12345678 = 80534 (0.09%)
12345 = 76051 (0.08%)
homelesspa = 74418 (0.08%)
1234567 = 68161 (0.07%)
111111 = 66460 (0.07%)
qwerty = 63957 (0.07%)
1234567890 = 58651 (0.06%)
123123 = 52272 (0.06%)
iloveyou = 51664 (0.06%)
000000 = 49783 (0.05%)
1234 = 35583 (0.04%)
123456a = 34675 (0.04%)
monkey = 32926 (0.04%)
dragon = 29902 (0.03%)

Top 20 base words
password = 273853 (0.3%)
passer = 208434 (0.23%)
qwerty = 163356 (0.18%)
love = 161514 (0.18%)
july = 148833 (0.16%)
march = 144519 (0.16%)
phone = 122229 (0.13%)
shark = 121618 (0.13%)
lunch = 119449 (0.13%)
pole = 119240 (0.13%)
table = 119215 (0.13%)
glass = 119164 (0.13%)
frame = 118830 (0.13%)
iloveyou = 118447 (0.13%)
angel = 101049 (0.11%)
alex = 98135 (0.11%)
monkey = 97850 (0.11%)
myspace = 90841 (0.1%)
michael = 88258 (0.1%)
mike = 82412 (0.09%)

Password length (length ordered)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
3 = 247263 (0.27%)
4 = 1046032 (1.15%)
5 = 1842546 (2.02%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
8 = 25586920 (28.06%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
12 = 1788770 (1.96%)
13 = 1014515 (1.11%)
14 = 709778 (0.78%)
15 = 846485 (0.93%)
16 = 475022 (0.52%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
19 = 83420 (0.09%)
20 = 93576 (0.1%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
30 = 12573 (0.01%)
31 = 4168 (0.0%)
32 = 66017 (0.07%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
40 = 435 (0.0%)
41 = 45 (0.0%)
42 = 57 (0.0%)
43 = 14 (0.0%)
44 = 47 (0.0%)
45 = 5 (0.0%)
46 = 13 (0.0%)
47 = 1 (0.0%)
48 = 16 (0.0%)
49 = 14 (0.0%)
50 = 21 (0.0%)
51 = 2 (0.0%)
52 = 1 (0.0%)
53 = 2 (0.0%)
54 = 22 (0.0%)
55 = 1 (0.0%)
56 = 3 (0.0%)
57 = 1 (0.0%)
58 = 2 (0.0%)
60 = 10 (0.0%)
61 = 3 (0.0%)
63 = 3 (0.0%)
64 = 1 (0.0%)
65 = 2 (0.0%)
66 = 9 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
69 = 1 (0.0%)
70 = 1 (0.0%)
71 = 3 (0.0%)
72 = 1 (0.0%)
73 = 1 (0.0%)
74 = 1 (0.0%)
76 = 2 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
79 = 3 (0.0%)
81 = 3 (0.0%)
83 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
90 = 6 (0.0%)
92 = 3 (0.0%)
93 = 1 (0.0%)
95 = 1 (0.0%)
96 = 16 (0.0%)
97 = 1 (0.0%)
98 = 3 (0.0%)
99 = 2 (0.0%)
100 = 1 (0.0%)
104 = 1 (0.0%)
107 = 1 (0.0%)
108 = 1 (0.0%)
109 = 1 (0.0%)
111 = 2 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
128 = 377 (0.0%)

Password length (count ordered)
8 = 25586920 (28.06%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
5 = 1842546 (2.02%)
12 = 1788770 (1.96%)
4 = 1046032 (1.15%)
13 = 1014515 (1.11%)
15 = 846485 (0.93%)
14 = 709778 (0.78%)
16 = 475022 (0.52%)
3 = 247263 (0.27%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
20 = 93576 (0.1%)
19 = 83420 (0.09%)
32 = 66017 (0.07%)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
30 = 12573 (0.01%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
31 = 4168 (0.0%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
40 = 435 (0.0%)
128 = 377 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
42 = 57 (0.0%)
44 = 47 (0.0%)
41 = 45 (0.0%)
54 = 22 (0.0%)
50 = 21 (0.0%)
48 = 16 (0.0%)
96 = 16 (0.0%)
49 = 14 (0.0%)
43 = 14 (0.0%)
46 = 13 (0.0%)
60 = 10 (0.0%)
66 = 9 (0.0%)
90 = 6 (0.0%)
45 = 5 (0.0%)
71 = 3 (0.0%)
56 = 3 (0.0%)
92 = 3 (0.0%)
79 = 3 (0.0%)
98 = 3 (0.0%)
63 = 3 (0.0%)
61 = 3 (0.0%)
81 = 3 (0.0%)
51 = 2 (0.0%)
58 = 2 (0.0%)
65 = 2 (0.0%)
53 = 2 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
76 = 2 (0.0%)
111 = 2 (0.0%)
99 = 2 (0.0%)
73 = 1 (0.0%)
72 = 1 (0.0%)
74 = 1 (0.0%)
70 = 1 (0.0%)
69 = 1 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
64 = 1 (0.0%)
109 = 1 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
83 = 1 (0.0%)
107 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
104 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
57 = 1 (0.0%)
100 = 1 (0.0%)
55 = 1 (0.0%)
93 = 1 (0.0%)
52 = 1 (0.0%)
95 = 1 (0.0%)
47 = 1 (0.0%)
97 = 1 (0.0%)
108 = 1 (0.0%)


One to six characters = 18900217 (20.73%)
One to eight characters = 58813691 (64.5%)
More than eight characters = 32364762 (35.5%)

Only lowercase alpha = 25300978 (27.75%)
Only uppercase alpha = 468686 (0.51%)
Only alpha = 25769664 (28.26%)
Only numeric = 9526597 (10.45%)

First capital last symbol = 72550 (0.08%)
First capital last number = 2427417 (2.66%)

Single digit on the end = 13167140 (14.44%)
Two digits on the end = 14225600 (15.6%)
Three digits on the end = 6155272 (6.75%)

Last number
0 = 4370023 (4.79%)
1 = 12711477 (13.94%)
2 = 5661520 (6.21%)
3 = 6642438 (7.29%)
4 = 3951994 (4.33%)
5 = 4028739 (4.42%)
6 = 4295485 (4.71%)
7 = 4055751 (4.45%)
8 = 3596305 (3.94%)
9 = 4240044 (4.65%)


Last digit
1 = 12711477 (13.94%)
3 = 6642438 (7.29%)
2 = 5661520 (6.21%)
0 = 4370023 (4.79%)
6 = 4295485 (4.71%)
9 = 4240044 (4.65%)
7 = 4055751 (4.45%)
5 = 4028739 (4.42%)
4 = 3951994 (4.33%)
8 = 3596305 (3.94%)

Last 2 digits (Top 20)
23 = 2831841 (3.11%)
12 = 1570044 (1.72%)
11 = 1325293 (1.45%)
01 = 1036629 (1.14%)
56 = 1013453 (1.11%)
10 = 909480 (1.0%)
00 = 897526 (0.98%)
13 = 854165 (0.94%)
09 = 814370 (0.89%)
21 = 812093 (0.89%)
22 = 709996 (0.78%)
89 = 706074 (0.77%)
07 = 675624 (0.74%)
34 = 627901 (0.69%)
08 = 626722 (0.69%)
69 = 572897 (0.63%)
88 = 557667 (0.61%)
77 = 557429 (0.61%)
14 = 539236 (0.59%)
45 = 530671 (0.58%)

Last 3 digits (Top 20)
123 = 2221895 (2.44%)
456 = 807267 (0.89%)
234 = 434714 (0.48%)
009 = 326602 (0.36%)
789 = 318622 (0.35%)
000 = 316149 (0.35%)
345 = 295463 (0.32%)
111 = 263894 (0.29%)
101 = 225151 (0.25%)
007 = 222062 (0.24%)
321 = 221598 (0.24%)
666 = 201995 (0.22%)
010 = 192798 (0.21%)
777 = 164454 (0.18%)
011 = 141015 (0.15%)
001 = 138363 (0.15%)
008 = 137610 (0.15%)
999 = 129483 (0.14%)
987 = 126046 (0.14%)
678 = 123301 (0.14%)

Last 4 digits (Top 20)
3456 = 727407 (0.8%)
1234 = 398622 (0.44%)
2009 = 298108 (0.33%)
2345 = 269935 (0.3%)
6789 = 258059 (0.28%)
1111 = 148964 (0.16%)
2010 = 140684 (0.15%)
2008 = 111014 (0.12%)
2000 = 110456 (0.12%)
0000 = 108767 (0.12%)
2011 = 103328 (0.11%)
5678 = 102873 (0.11%)
4567 = 94964 (0.1%)
2007 = 94172 (0.1%)
4321 = 92849 (0.1%)
3123 = 92104 (0.1%)
1990 = 87828 (0.1%)
1987 = 87142 (0.1%)
2006 = 86640 (0.1%)
1991 = 86574 (0.09%)

Last 5 digits (Top 20)
23456 = 721648 (0.79%)
12345 = 261734 (0.29%)
56789 = 252914 (0.28%)
11111 = 116179 (0.13%)
45678 = 96011 (0.11%)
34567 = 90262 (0.1%)
23123 = 84654 (0.09%)
00000 = 81056 (0.09%)
54321 = 73623 (0.08%)
67890 = 66301 (0.07%)
21212 = 28777 (0.03%)
23321 = 28767 (0.03%)
77777 = 28572 (0.03%)
22222 = 27754 (0.03%)
55555 = 26081 (0.03%)
66666 = 25872 (0.03%)
56123 = 21354 (0.02%)
88888 = 19025 (0.02%)
99999 = 18288 (0.02%)
12233 = 16677 (0.02%)

Character sets
loweralphanum: 47681569 (52.29%)
loweralpha: 25300978 (27.75%)
numeric: 9526597 (10.45%)
mixedalphanum: 3075964 (3.37%)
loweralphaspecial: 1721507 (1.89%)
loweralphaspecialnum: 1167596 (1.28%)
mixedalpha: 981987 (1.08%)
upperalphanum: 652292 (0.72%)
upperalpha: 468686 (0.51%)
mixedalphaspecialnum: 187283 (0.21%)
specialnum: 81096 (0.09%)
mixedalphaspecial: 53882 (0.06%)
upperalphaspecialnum: 39668 (0.04%)
upperalphaspecial: 18674 (0.02%)
special: 14657 (0.02%)

Character set ordering
stringdigit: 41059315 (45.03%)
allstring: 26751651 (29.34%)
alldigit: 9526597 (10.45%)
othermask: 4189226 (4.59%)
digitstring: 4075593 (4.47%)
stringdigitstring: 2802490 (3.07%)
stringspecial: 792852 (0.87%)
digitstringdigit: 716311 (0.79%)
stringspecialstring: 701378 (0.77%)
stringspecialdigit: 474579 (0.52%)
specialstring: 45323 (0.05%)
specialstringspecial: 28480 (0.03%)
allspecial: 14657 (0.02%)

[The post Your Password is Already In the Wild, You Did not Know? was first published on /dev/random]

2017-05-19T10:53:58+00:00 Xavier 2017-05-19T10:53:58+00:00 /dev/random The Arctic seed vault had to deal with melting permafrost last winter

(credit: Mari Tefre/Svalbard Globale frøhvelv)

In Arctic Svalbard, there is a vault that might sound like a sci-fi plot device. Completed in 2008, the Global Seed Vault is a remote archive for safeguarding seeds for thousands of crop varieties. If anything dramatic should happen elsewhere around the world, we want these seeds to be there.

The vault consists of a giant freezer room bored into a mountain, protected by the bedrock around it and the permafrost above it. But according to a report in The Guardian, the vault experienced an unhappy surprise recently—melting permafrost in winter.

The Arctic just experienced its second-warmest winter on record (surpassed only by 2016), and Svalbard saw remarkable temperatures and even rain. In fact, Svalbard averaged more than 4 °C above even the 2004-2013 average.


2017-05-19T22:00:26+00:00 Scott K. Johnson 2017-05-19T22:00:26+00:00 Ars Technica Fractional scaling goes east

When we introduced HiDPI support in GNOME a few years ago, we took the simplest possible approach that was feasible to implement with the infrastructure we had available at the time.

Some of the limitations:

  • You either get 1:1 or 2:1 scaling, nothing in between
  • The cut-off point is somewhat arbitrarily chosen, and you don’t get a say in it
  • In multi-monitor systems, all monitors share the same scale

Each of these limitations had technical reasons. For example, doing different scales per-monitor is hard to do as long as you are only using a single, big framebuffer for all of them. And allowing scale factors such as 1.5 leads to difficult questions about how to deal with windows that have a size like 640.5×480.5 pixels.
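The half-pixel problem is easy to see numerically: only integer scale factors are guaranteed to map integer logical sizes to integer device sizes. A quick illustration:

```python
def device_size(logical_w: int, logical_h: int, scale: float) -> tuple:
    """Logical window size -> device pixels at a given scale factor."""
    return (logical_w * scale, logical_h * scale)

print(device_size(640, 480, 2))    # (1280, 960): clean integer pixels
print(device_size(427, 321, 1.5))  # (640.5, 481.5): half pixels appear
```

A compositor has to decide what to do with that half pixel: round it (and slightly distort the window), or render at a higher integer scale and downsample, each with its own trade-offs.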

Over the years, we’ve removed the technical obstacles one-by-one, e.g. introduced per-monitor framebuffers. One of the last obstacles was the display configuration API that mutter exposes to the control-center display panel, which was closely modeled on XRANDR, and not suitable for per-monitor and non-integer scales. In the last cycle, we introduced a new, more suitable monitor configuration API, and the necessary support for it has just landed in the display panel.

With this, all of the hurdles have been cleared away, and we are finally ready to get serious about fractional scaling!

Yes, a hackfest!

Jonas and Marco happen to both be in Taipei in early June, so what better to do than to get together and spend some days hacking on fractional scaling support.

If you are a compositor developer (or plan on becoming one), or just generally interested in helping with this work, and are in the area, please check out the date and location by following the link. And, yes, this is a bit last-minute, but we still wanted to give others a chance to participate.

2017-05-19T18:34:33+00:00 mclasen 2017-05-19T18:34:33+00:00 Planet GNOME Keylogger Found in HP Laptop Audio Drivers

This is a weird story: researchers have discovered that an audio driver installed in some HP laptops includes a keylogger, which records all keystrokes to a local file. There seems to be nothing malicious about this, but it's a vivid illustration of how hard it is to secure a modern computer. The operating system, drivers, processes, application software, and everything else is so complicated that it's pretty much impossible to lock down every aspect of it. So many things are eavesdropping on different aspects of the computer's operation, collecting personal data as they do so. If an attacker can get to the computer when the drive is unencrypted, he gets access to all sorts of information streams -- and there's often nothing the computer's owner can do.

2017-05-17T11:32:14+00:00 Bruce Schneier 2017-05-17T11:32:14+00:00 Schneier on Security WordPress Now on HackerOne

WordPress has grown a lot over the last thirteen years – it now powers more than 28% of the top ten million sites on the web. During this growth, each team has worked hard to continually improve their tools and processes. Today, the WordPress Security Team is happy to announce that WordPress is now officially on HackerOne!

HackerOne is a platform for security researchers to securely and responsibly report vulnerabilities to our team. It provides tools that improve the quality and consistency of communication with reporters, and will reduce the time spent on responding to commonly reported issues. This frees our team to spend more time working on improving the security of WordPress.

The security team has been working on this project for quite some time. Nikolay Bachiyski started the team working on it just over a year ago. We ran it as a private program while we worked out our procedures and processes, and are excited to finally make it public.

With the announcement of the WordPress HackerOne program we are also introducing bug bounties. Bug bounties let us reward reporters for disclosing issues to us and helping us secure our products and infrastructure. We’ve already awarded more than $3,700 in bounties to seven different reporters! We are thankful to Automattic for paying the bounties on behalf of the WordPress project.

The program and bounties cover all our projects including WordPress, BuddyPress, bbPress, GlotPress, and WP-CLI, as well as all of our sites.

2017-05-15T16:02:19+00:00 Aaron D. Campbell 2017-05-15T16:02:19+00:00 WordPress Development Blog Microsoft’s Response to WannaCrypt
WannaCrypt ransomware spreading through a computer lab at the University of Milano-Bicocca / @UID_

In a recent blog post, Microsoft argued that the use of a Windows XP vulnerability stolen from the NSA and released by the Shadow Brokers has caused widespread damage in the public domain, and that the lesson governments should learn from this incident is that government stockpiling of vulnerabilities, which might be inadvertently revealed, presents a hazard to safe computing around the world.

It’s certainly fair to suggest that the risks of government stockpiling vulnerabilities present a downside risk to safe computing that needs to be taken into account in deciding whether or not to reveal vulnerabilities to vendors so that they can be fixed.  But the Microsoft statement implies that the only reasonable outcome of such a decision is to reveal vulnerabilities, and that doesn’t follow at all.  That downside risk does add some additional weight to that side of the argument, but in any given instance, it might not be sufficient to tilt the scale in that direction depending on the weight on the other side of the argument.

Moreover and as the blog post states, Microsoft issued a fix for the vulnerability in question in March—a month before it was released by the Shadow Brokers.  Good cyber hygiene would suggest that patches should be applied when they are made available, and WannaCrypt struck two months after that patch was issued.  If I don’t wash my hands before eating and I get sick, it is indeed the fault of the microbes in the environment.  But if I have a long history of not washing my hands before eating and not getting sick, it just means I’ve been lucky—not that I have microbial immunity.  I should be washing my hands before every meal unless there’s some very good reason for not doing so, and system administrators should be patching their systems when patches are available unless there’s some very good reason for not doing so.

And finally, Windows XP has been supplanted by Windows 7 and Windows 10.  Old systems are more vulnerable than newer systems, and administrators who are trying to save on costs by not moving to newer systems will usually run greater risks of compromise.

Does NSA bear any responsibility for the outbreak of WannaCrypt through its stockpiling of some vulnerabilities that were subsequently revealed?  Sure, in the sense that if it had refrained from obtaining vulnerabilities at all, no vulnerabilities would have been released and the WannaCrypt creators would not have had the Shadow Brokers dump as a resource.  (Of course, we don’t actually know that those creators used the Shadow Brokers dump—that’s an assumption that I happen to believe, but it is also possible that they would have discovered it independently.  After all, Microsoft apparently did as well (but see footnote)). 

But one could argue just as well that Github—the distribution channel for the Shadow Brokers—was equally responsible for making the vulnerability and exploit code widely available.  So why isn’t anyone complaining about Github’s actions in this regard?  At the very least, both entities share some degree of responsibility—NSA for allowing the vulnerability to be leaked, and Github for publicizing it.

Microsoft has been advocating the idea that governments commit to disclosing all vulnerabilities since February 2017. Different people will find the arguments for this idea more or less persuasive depending on their analysis (I’m personally not persuaded), but in my view, the WannaCrypt incident does not significantly strengthen the arguments for it as the blog post suggests it does.

Footnote: If Microsoft did have advance notice of the vulnerability from NSA that enabled it to fix the problem before the Shadow Brokers dump, that fact would indicate that NSA did reveal the vulnerability to the vendor, albeit possibly with some delay after its acquisition.  Nick Weaver makes the same point in his recent posting.

Herb Lin
Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Research Fellow at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
Publish Date: Monday, May 15, 2017, 8:48 AM
2017-05-15T12:50:38+00:00 HLin 2017-05-15T12:50:38+00:00 Lawfare The Criminals Behind WannaCry

359,000 computers infected, dozens of nations affected world-wide! A worm exploiting a Windows OS vulnerability that looks to the network for more computers to infect! This is the most pernicious, evil, dangerous attack ever.

“The Big One,” Wired pronounced.

“An unprecedented attack,” said the head of Europol.

Cue the gnashing of teeth and hand-wringing!

Wait, what? WannaCry isn't unprecedented! Why would any professional in the field think so? I'm talking about Code Red, and it happened in July 2001.

Since then, dozens, perhaps hundreds of Best Common Practice documents (several of which I've personally worked on) have been tirelessly written, published, and evangelized, apparently to no good effect. Hundreds of thousands, perhaps millions of viruses and worms have come and gone.

Our advice to 'update your systems, software, and anti-virus software' and to 'back up your computer' has been ignored. The object lesson taught by Code Red, from almost sixteen years ago, has been forgotten.

Criminal charges should be considered: anyone who administers a system that touches critical infrastructure, and whose computers under their care were made to Cry, should be charged with negligence if people suffered or died, as is very much a possibility for NHS patients in the UK. Whatever ransom was paid should be taken from any termination funds they receive, along with six weeks' pay, since they clearly were not doing their job for at least that long.

Harsh? Not really. The facts speak for themselves. A patch was available at least six weeks prior (and yesterday was even made available by Microsoft for 'unsupported' platforms such as Windows XP), as was the case with Code Red.

One representative from a medical association said guilelessly, in one of the many articles I've read since Friday, 'we are very slow to update our computers'. This from someone with a medical degree. Yeah, thanks for the confirmation, pal.

The worm has been stopped from spreading. For now. The kill-switch domain was registered by a security researcher and sinkholed.

Sorry, forget it. I went for a coffee while writing this, and predictably WannaCry V2 has since been spotted in the wild, without the kill-switch domain left dangling.
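The kill-switch mechanics are simple: before doing anything else, the worm checked whether a particular previously unregistered domain had become reachable, and if so it exited. A rough sketch of that check; the domain shown is a deliberate placeholder, not the real one:

```python
import socket

def kill_switch_active(domain: str, timeout: float = 3.0) -> bool:
    """True if the kill-switch domain resolves and accepts a connection."""
    try:
        with socket.create_connection((domain, 80), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeouts
        return False

# The worm's logic, inverted here for safety: it spread only while the
# domain stayed dead, so registering it flipped the check world-wide.
# if not kill_switch_active("kill-switch.example"):
#     ...propagate...
```

Sinkholing works precisely because this check is global shared state: one registration changes the answer for every new infection at once. A variant with the check removed, as described above, has no such off switch.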

What have we learned from all of this, all of this for a lousy $26,000?

If someone gets arrested and charged (and by someone, I mean systems administrators, 'CSOs', and anyone else in line to protect systems who abjectly failed this time), a lot. WannaCry infections of critical infrastructure are an inexcusable professional lapse. Or, we could just do all of this again next time, and people may die.

Afterthought: My organization recently turned 20 years old. When it started, we didn't believe things could get this bad, but it wasn't too long after that it became apparent. I issued dire warnings about botnets to the DHS in 2001, and made public pronouncements to these ends in 2005 (greeted by rolled eyes from an RCMP staff sergeant). I may have been a little too prescient for my own good at the time, but can anyone really say, in this day and age, that they didn't know lives are at stake, and that we are counting on those responsible for data safety to at least do the bare minimum? I await your comments, below.

Written by Neil Schwartzman, Executive Director, The Coalition Against Unsolicited Commercial Email - CAUCE

More under: Cybercrime, Malware, Cybersecurity

2017-05-14T15:00:00+00:00 Neil Schwartzman 2017-05-14T15:00:00+00:00 CircleID Global ‘Wana’ Ransomware Outbreak Earned Perpetrators $26,000 So Far

As thousands of organizations work to contain and clean up the mess from this week’s devastating Wana ransomware attack, the fraudsters responsible for releasing the digital contagion are no doubt counting their earnings and congratulating themselves on a job well done. But according to a review of the Bitcoin addresses hard-coded into Wana, it appears the perpetrators of what’s being called the worst ransomware outbreak ever have made little more than USD $26,000 so far from the scam.

Victims of the Wana ransomware will see this lock screen demanding a $300 ransom to unlock all encrypted files.

The Wana ransomware became a global epidemic virtually overnight this week, after criminals started distributing copies of the malware with the help of a security vulnerability in Windows computers that Microsoft patched in March 2017. Infected computers have all their documents and other important user files scrambled with strong encryption, and victims without access to good backups of that data have two choices: kiss the data goodbye, or pay the ransom — the equivalent of approximately USD $300 worth of the virtual currency Bitcoin.

According to a detailed writeup on the Wana ransomware published Friday by security firm Redsocks, Wana contains three bitcoin payment addresses that are hard-coded into the malware. One of the nice things about Bitcoin is that anyone can view all of the historic transactions tied to a given Bitcoin payment address. As a result, it’s possible to tell how much the criminals at the helm of this crimeware spree have made so far and how many victims have paid the ransom.

A review of the three payment addresses hardcoded into the Wana ransomware strain indicates that these accounts to date have received 100 payments totaling slightly more than 15 Bitcoins — or approximately $26,148 at the current Bitcoin-to-dollars exchange rate.
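The arithmetic behind that figure is just the summed address balances times the prevailing exchange rate. A back-of-the-envelope sketch; the per-address split and the ~$1,743/BTC rate are assumptions chosen only to be consistent with the totals quoted above:

```python
# Hypothetical split of the ~15 BTC across the three hard-coded addresses.
received_btc = [5.25, 5.0, 4.75]      # assumed per-address amounts
usd_per_btc = 1743.2                  # assumed rate implied by the article

total_btc = sum(received_btc)
total_usd = round(total_btc * usd_per_btc)
print(total_btc)   # 15.0
print(total_usd)   # 26148
```

With 100 payments received, that also works out to the $300-per-victim ransom figure (100 × ~$300 ≈ $30,000 at the time of payment, before exchange-rate movement).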


It is possible that the crooks responsible for this attack maintained other Bitcoin addresses that were used to receive payments in connection with this attack, but there is currently no evidence of that. It’s worth noting that the ransom note Wana popped up on victim screens (see screenshot above) included a “Contact Us” feature that may have been used by some victims to communicate directly with the fraudsters. Also, I realize that in many ways USD $26,000 is a great deal of money.

However, I find it depressing to think of the massive financial damage likely wrought by this ransom campaign in exchange for such a comparatively small reward. It’s particularly galling because this attack potentially endangered the lives of many. At least 16 hospitals in the United Kingdom were diverting patients and rescheduling procedures on Friday thanks to the Wana outbreak, meaning the attack may well have hurt people physically (no deaths have been reported so far, thank goodness).

Unfortunately, this glaring disparity is par for the course with cybercrime in general. As I observed on several occasions in my book Spam Nation — which tracked the careers of some of the most successful malware writers and pharmacy pill spammers on the planet — it was often disheartening to see how little money most of those guys made given the sheer amount of digital disease they were pumping out into the Internet on a daily basis.

In fact, very few of these individuals made much money at all, and yet they were responsible for perpetuating a global crime machine that inflicted enormous damage on businesses and consumers. A quote in the book from Stefan Savage, a computer science professor at the University of California, San Diego (UCSD) encapsulates the disparity quite nicely and seems to have aged quite well:

“What’s fascinating about all this is that at the end of the day, we’re not talking about all that much money,” Savage said. “These guys running the pharma programs are not Donald Trumps, yet their activity is going to have real and substantial financial impact on the day-to-day lives of tens of millions of people. In other words, for these guys to make modest riches, we need a multibillion-dollar industry to deal with them.”

    2017-05-13T20:10:43+00:00 BrianKrebs 2017-05-13T20:10:43+00:00 Krebs on Security Securing Elections

    Technology can do a lot more to make our elections more secure and reliable, and to ensure that participation in the democratic process is available to all. There are three parts to this process.

    First, the voter registration process can be improved. The whole process can be streamlined. People should be able to register online, just as they can register for other government services. The voter rolls need to be protected from tampering, as that's one of the major ways hackers can disrupt the election.

    Second, the voting process can be significantly improved. Voting machines need to be made more secure. There are a lot of technical details best left to the voting-security experts who can deal with them, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters and counted by computer, but recountable by hand.

    We need national security standards for voting machines, and funding for states to procure machines that comply with those standards.

    This means no Internet voting. While that seems attractive, and certainly a way technology can improve voting, we don't know how to do it securely. We simply can't build an Internet voting system that is secure against hacking because of the requirement for a secret ballot. This makes voting different from banking and anything else we do on the Internet, and it makes security much harder. Even allegations of vote hacking would be enough to undermine confidence in the system, and we simply cannot afford that. We need a system of pre-election and post-election security audits of these voting machines to increase confidence in the system.
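    The audit idea can be illustrated with a toy sketch. This shows only the principle, not any jurisdiction's actual procedure; real post-election audits (such as risk-limiting audits) choose their sample sizes statistically. The check is simply: hand-count a random sample of the paper ballots and compare them against the machine's record of the same ballots.

```python
import random

def audit_machine(paper_ballots, machine_tally, sample_size, seed=0):
    """Toy post-election audit: hand-count a random sample of paper
    ballots and compare against the machine's record for those ballots.
    Returns True if the sample matches, False on any discrepancy."""
    rng = random.Random(seed)
    sample = rng.sample(range(len(paper_ballots)), sample_size)
    return all(paper_ballots[i] == machine_tally[i] for i in sample)

# Hypothetical ballot-by-ballot records from paper and from the machine.
paper = ["A", "B", "A", "A", "B", "A"]
machine_ok = list(paper)                       # machine recorded faithfully
machine_bad = ["A", "B", "B", "A", "B", "A"]   # one flipped vote

print(audit_machine(paper, machine_ok, 4))
print(audit_machine(paper, machine_bad, 6))   # full-sample check catches the flip
```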

    The third part of the voting process we need to secure is the tabulation system. After the polls close, we aggregate votes: from individual machines, to polling places, to precincts, and finally to totals. This system is insecure as well, and we can do a lot more to make it reliable. Similarly, our system of recounts can be made more secure and efficient.

    We have the technology to do all of this. The problem is political will. We have to decide that the goal of our election system is for the most people to be able to vote with the least amount of effort. If we continue to enact voter suppression measures like ID requirements, barriers to voter registration, limitations on early voting, reduced polling place hours, and faulty machines, then we are harming democracy more than we are by allowing our voting machines to be hacked.

    We have already declared our election system to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states. We can do much more. We owe it to democracy to do it.

    This essay previously appeared on

    2017-05-14T18:57:02+00:00 Bruce Schneier 2017-05-14T18:57:02+00:00 Schneier on Security Would You Like Your Private Information to be Available on a VHS or Betamax Tape?

    When I was a young child growing up in the late 1980s, my parents were lucky enough to be able to afford to have both a VHS-tape video-recorder in the living room and a Betamax tape recorder in their bedroom. This effectively meant that to me, the great video format wars weren't a decade-defining clash of technologies, but rather they consisted mainly of answering the question "in which room can I watch my favorite cartoons?". It is only now with the perspective of time that I realize that my small dilemma was the result of two distinct groups with contradictory interests bidding for control of a massive market of home video users.

    I was reminded of this piece of digital archeology by the recent news of the repeal of the FCC's internet privacy rules, partly because I'm starting to recognize patterns similar to those of the video format wars in the field of digital privacy: the kind of patterns that should give business leaders and stakeholders in privacy-sensitive businesses pause, as a potentially strategic business decision lies in the immediate future.

    It comes as no news to privacy practitioners that there is a long-existing schism between the European approach to digital privacy and the American approach to the subject: The US legislative and administrative bodies generally tend to adopt more business-friendly regulations prohibiting the abuse of information but permitting its commodification and trade, while the European stance is to consider digital privacy as a human rights issue (in some European-influenced jurisdictions, such as Israel, the concept of privacy is even explicitly designated as a basic human right and afforded constitutional protection).

    The European legal institutions have consistently shown that they are not deterred by the international implications of their rulings (as demonstrated recently by the invalidation of the Safe Harbour framework following the October 2015 decision in Schrems v. DPC, necessitating the expedited negotiation of the Privacy Shield agreement), which is why I believe we're on the verge of a major event, one in which the distance between the two legal conceptions of privacy becomes impossible to bridge.

    When one takes into account the EU's General Data Protection Regulation (set to enter into effect in spring 2018) and contrasts it with the recent repeal of the FCC's rules, it is impossible not to notice that battle lines are being drawn. This is particularly true given that the GDPR applies not only to data processed or located within the EU itself, but also personally to citizens of EU nations, even when they are not physically present in the EU.

    Under this principle, the latest move by American authorities to allow ISPs to sell information that was until now treated as private therefore poses an interesting challenge: if a German citizen purchases the services of an American VPN provider to mask her IP address, and said VPN provider routinely sells the information of its clients, would it be allowed to sell the sensitive information it gathers about the browsing habits of its German customer? Alternatively, if an American citizen purchases the services of an Estonian VPN, would the information gathered by the Estonian provider be eligible for sale under the FCC's new, slimmer rules? Furthermore, suppose a more remote but still possible case in which an ISP with multiple local subsidiaries or partnerships wishes to balance the load on its network by routing some of its Icelandic or Irish traffic through its New York sister company. Would the information of the Irish users be available for sale under the laws of the United States, and if so, to what extent would the sale be permissible?

    It will be interesting to see if these trends will fully materialize into radically distinct views of the concept of digital privacy. The ever-growing distance between the two views is slowly but surely leading to a situation in which Europe's stance on digital security and privacy is not only noticeably stricter than the American interpretation, but it is also becoming effectively incompatible with it. This may eventually force all of us to choose whether to comply with either the American rules or the European rules, as we will be unable to conform to both at the same time.

    Both sides have strong arguments, and both can make compelling cases for their position, but both sides also have weaknesses in their positions, and neither is immune from criticism. More interestingly, both sides also have significant economic advantages and disadvantages that can quickly turn the debate from a principled discussion of what privacy means and how it's enforced into a stand-off between two of the world's largest economies.

    If the debate between the two approaches eventually evolves into a business decision at the level of independent corporations and people, then much like the video format wars of old, it is only a matter of time until eventually one set of rules triumphs over the other, as markets are wont to do. But unlike the question of where a young boy in Mexico will spend the early hours of a lazy weekend in front of the television, the decision as to who can access our browsing habits and for what purpose can have far more comprehensive ramifications. Which approach will ultimately triumph remains to be seen.

    Written by Jonathan Braverman, Legal and Operations Officer at Cymmetria


    2017-05-10T20:28:00+00:00 Jonathan Braverman 2017-05-10T20:28:00+00:00 CircleID Tesla starts pre-orders on solar roof for $1,000, rolls out calculator for costs

    Tesla is starting pre-orders on smooth and textured black glass solar roofs. (credit: Tesla)

    Tesla CEO Elon Musk announced on Twitter on Wednesday that the company’s solar roof panels would be available for pre-order that afternoon. In a press conference Wednesday afternoon, Tesla and SolarCity executives said the roof would be cheaper, on the whole, than installing a regular tile roof (although not cheaper than an asphalt roof). Pre-orders require a $1,000 payment to secure a place on the list.

    Tesla also rolled out a calculator on its website using data from Google Sunroof, a 2015 project from the search giant that used 3D modeling to map out every house’s potential for solar panel output. Tesla’s calculator factors in the cost of a 14kWh Powerwall, although purchase of a Powerwall is not required to get a solar roof, as well as any tax incentives that a customer might receive in their state. The "energy value" number featured most prominently is calculated over 30 years, which is the length of the warranty covering power production from the tiles. (Tesla is offering an "infinity warranty" on the tiles themselves.)
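    At its core, the calculator's headline number is simple arithmetic: annual production times the local electricity rate, accumulated over the 30-year warranty period, plus incentives. A back-of-the-envelope sketch in which every number is made up and effects a real calculator would model, like rate inflation and panel degradation, are ignored:

```python
def energy_value(annual_kwh, rate_per_kwh, years=30, tax_incentive=0.0):
    """Rough 30-year energy value: production * rate * years + incentives.
    Ignores electricity-rate inflation and panel degradation."""
    return annual_kwh * rate_per_kwh * years + tax_incentive

# Hypothetical household: 8,000 kWh/yr at $0.12/kWh with a $5,000 incentive.
print(f"${energy_value(8000, 0.12, tax_incentive=5000):,.0f}")
```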



    2017-05-10T19:15:55+00:00 Megan Geuss 2017-05-10T19:15:55+00:00 Ars Technica Cable lobby conducts survey, finds that Americans want net neutrality

    (credit: M3Li55@)

    As US cable companies push to eliminate or change net neutrality rules, the industry's primary lobby group today released the results of a survey that it says shows "strong bipartisan consensus that the government should let the Internet flourish without imposing burdensome regulations."

    But proponents of keeping the current rules can find plenty to like in the survey conducted by NCTA—The Internet & Television Association. A strong majority of the 2,194 registered American voters in the survey support the current net neutrality rules that prohibit ISPs from blocking, throttling, or prioritizing online content in exchange for payment. While most opposed price regulation, a majority supported an approach in which regulators take action against ISPs on a case-by-case basis when consumers are harmed—the exact same approach the Federal Communications Commission uses under its existing net neutrality regime.

    Full results of the NCTA survey conducted with Morning Consult are available here.


    2017-05-11T19:15:59+00:00 Jon Brodkin 2017-05-11T19:15:59+00:00 Ars Technica Sprint sues government over elimination of broadband price caps

    Money. (credit: Getty Images | GP Kidd)

    Sprint and Windstream sued the Federal Communications Commission this week over a decision that will help AT&T, Verizon, and CenturyLink charge higher prices for certain business Internet services.

    The FCC last month voted to eliminate price caps for the so-called Business Data Services (BDS) that are offered by incumbent phone companies throughout the country. The FCC decision to which Sprint and Windstream object only eliminated price caps in "competitive" markets, but it uses a standard that deems many local markets competitive even when there's only one broadband provider.

    Sprint and Windstream both purchase bandwidth from BDS providers to boost their own networks. The Sprint/Windstream complaint in the US Court of Appeals for the DC Circuit alleges that the FCC decision is "arbitrary, capricious, and an abuse of discretion." The complaint also asserts that the FCC decision violates federal laws including "the notice-and-comment requirements of the Administrative Procedure Act."


    2017-05-11T20:35:54+00:00 Jon Brodkin 2017-05-11T20:35:54+00:00 Ars Technica Microsoft Issues WanaCrypt Patch for Windows 8, XP

    Microsoft Corp. today took the unusual step of issuing security updates to address flaws in older, unsupported versions of Windows — including Windows XP and Windows 8. The move is a bid to slow the spread of the WanaCrypt ransomware strain that infected tens of thousands of Windows computers virtually overnight this week.

    A map tracking the global spread of the Wana ransomware strain. Image:


    On Friday, May 12, countless organizations around the world began fending off attacks from a ransomware strain variously known as WannaCrypt, WanaDecrypt and Wanna.Cry. Ransomware encrypts a victim’s documents, images, music and other files unless the victim pays for a key to unlock them.

    It quickly became apparent that Wanna was spreading with the help of a file-sharing vulnerability in Windows. Microsoft issued a patch to fix this flaw back in March 2017, but organizations running older, unsupported versions of Windows (such as Windows XP) were unable to apply the update because Microsoft no longer supplies security patches for those versions of Windows.

    The software giant today made an exception to that policy after it became clear that many organizations hit hardest by Wanna were those still running older, unsupported versions of Windows.

    “Seeing businesses and individuals affected by cyberattacks, such as the ones reported today, was painful,” wrote Phillip Misner, principal security group manager at the Microsoft Security Response Center. “Microsoft worked throughout the day to ensure we understood the attack and were taking all possible actions to protect our customers.”

    The update to address the file-sharing bug that Wanna is using to spread is now available for Windows XP, Windows 8, and Windows Server 2003 via the links at the bottom of this advisory.

    On Friday, at least 16 hospitals in the United Kingdom were forced to divert emergency patients after computer systems there were infected with Wanna. According to multiple stories in the British media, approximately 90 percent of care facilities in the U.K.’s National Health Service are still using Windows XP – a 16-year-old operating system.

    According to a tweet from Jakub Kroustek, a malware researcher with security firm Avast, the company’s software has detected more than 100,000 instances of the Wana ransomware.

    For advice on how to harden your systems against ransomware, please see the tips in this post.

    2017-05-13T13:00:06+00:00 BrianKrebs 2017-05-13T13:00:06+00:00 Krebs on Security Cisco kills leaked CIA 0-day that let attackers commandeer 318 switch models

    Cisco Systems has patched a critical flaw that even novice hackers could exploit using Central Intelligence Agency attack tools that were recently leaked to the Internet.

    As previously reported, the zero-day exploit allowed attackers to issue commands that remotely execute malicious code on 318 models of Cisco switches. The attack code was published in early March by WikiLeaks as part of its Vault7 series of leaks, which the site is billing as the largest publication of intelligence documents ever.

    The bug resides in the Cisco Cluster Management Protocol (CMP), which uses the telnet protocol to deliver signals and commands on internal networks. It stems from a failure to restrict telnet options to local communications and the incorrect processing of malformed CMP-only telnet options.


    2017-05-09T20:41:20+00:00 Dan Goodin 2017-05-09T20:41:20+00:00 Ars Technica Criminals are Now Exploiting SS7 Flaws to Hack Smartphone Two-Factor Authentication Systems

    I've previously written about the serious vulnerabilities in the SS7 phone routing system. Basically, the system doesn't authenticate messages. Now, criminals are using it to hack smartphone-based two-factor authentication systems:

    In short, the issue with SS7 is that the network believes whatever you tell it. SS7 is especially used for data-roaming: when a phone user goes outside their own provider's coverage, messages still need to get routed to them. But anyone with SS7 access, which can be purchased for around 1000 Euros according to The Süddeutsche Zeitung, can send a routing request, and the network may not authenticate where the message is coming from.

    That allows the attacker to direct a target's text messages to another device, and, in the case of the bank accounts, steal any codes needed to login or greenlight money transfers (after the hackers obtained victim passwords).

    2017-05-10T11:50:11+00:00 Bruce Schneier 2017-05-10T11:50:11+00:00 Schneier on Security Mozilla and Thunderbird are continuing together, with conditions

    (credit: Mozilla)

    The Thunderbird e-mail client still has its supporters, but for the past couple of years, Mozilla has been making moves to distance itself from the project. In late 2015, Mozilla announced that it would be looking for a new home for Thunderbird, calling its continued maintenance "a tax" on Firefox development.

    Yesterday, Mozilla Senior Add-ons Technical Editor Philipp Kewisch announced Mozilla's future plans for Thunderbird—the Mozilla Foundation will "continue as Thunderbird’s legal, fiscal, and cultural home," but on the condition that the Thunderbird Council maintains a good working relationship with Mozilla leadership and that Thunderbird works to reduce its "operational and technical" reliance on Mozilla.

    As a first step toward operational independence, the Thunderbird Council has been soliciting donations from users, which Kewisch says has become "a strong revenue stream" that is helping to pay for servers and staff.


    2017-05-10T15:43:17+00:00 Andrew Cunningham 2017-05-10T15:43:17+00:00 Ars Technica Turning on Anycast on B-Root

    The B-root operators have announced that they would enable IP anycast on 1 May 2017 [1]. In this article, we show how that change has been perceived by the RIPE Atlas probes, and if there were any transient effects of this change.

    2017-05-09T14:14:49+00:00 giovane_moura 2017-05-09T14:14:49+00:00 RIPE Labs Help! My team is full of junior programmers!

    Hiring is perhaps the most challenging thing that any manager can ever do. Getting it right is half skill, half luck. Making a good decision on a candidate can be the difference between moving the project forward and setting it back. So what happens when you’re hoping to hire mid-level or senior engineers, and you […]

    The post Help! My team is full of junior programmers! appeared first on

    2017-04-11T12:00:00+00:00 Brandon Savage 2017-04-11T12:00:00+00:00 Planet PHP PHP Versions Stats - 2017.1 Edition

    It's stats o'clock! See 2014, 2015, 2016.1 and 2016.2 for previous similar posts.

    A quick note on methodology, because all these stats are imperfect as they just sample some subset of the PHP user base. I look in the logs of the last month for Composer installs done by someone. Composer sends the PHP version it is running with in its User-Agent header, so I can use that to see which PHP versions people are using Composer with.
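    That methodology can be sketched in a few lines. The log format and the exact User-Agent layout below are assumptions for illustration; the real point is just that a "PHP x.y.z" token can be extracted from each request and counted:

```python
import re
from collections import Counter

# Hypothetical access-log lines; the "PHP x.y.z" token that Composer
# sends in its User-Agent header is what the statistics are based on.
log_lines = [
    'GET /packages.json "Composer/1.4.2 (Linux; PHP 7.1.4)"',
    'GET /packages.json "Composer/1.4.1 (Darwin; PHP 5.6.30)"',
    'GET /packages.json "Composer/1.4.2 (Linux; PHP 7.1.3)"',
]

versions = Counter()
for line in log_lines:
    m = re.search(r"PHP (\d+\.\d+)\.\d+", line)
    if m:
        versions[m.group(1)] += 1  # group patch releases into a minor series

print(versions.most_common())
```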

    PHP usage statistics May 2017 (+/- diff from November 2016)

    All versions:               Grouped:
    PHP 5.6.30   14.73%         PHP 7.0   36.12% (+1.11)
    PHP 7.0.15    9.53%         PHP 5.6   31.44% (-6.02)
    PHP 5.5.9     6.12%         PHP 7.1   17.64% (+16.28)
    PHP 7.0.17    6.00%         PHP 5.5   10.61% (-8.32)
    PHP 7.1.3     5.88%         PHP 5.4    3.11% (-2.29)
    PHP 7.1.4     3.65%         PHP 5.3    0.98% (-0.62)

    A few observations: With a big boost of PHP 7.1 installs, PHP 7 overall now represents over 50%. 5.3/5.4 are really tiny, and even 5.5 is dropping significantly, which is good as it has not been maintained since last summer. That's a total of 85% of installs done on supported versions, which is pretty good.

    And because a few people have asked me this recently: while HHVM usage is not included in the graph above, it is at 0.36%, which is a third of PHP 5.3 usage and hardly significant. I personally think it's fine to keep supporting it in libraries if it just works, or if the fixes involved are minor. If not, then it's probably not worth the time investment.

    Also, since I now have quite a bit of data accumulated and the pie-chart format makes it hard to see the evolution, here is a new chart which shows the full historical dataset!

    I think it is pretty interesting, as it shows that 5.3/5.4/5.5 each had people slowly migrating in bunches, and those versions peaked at ~50% of the user base. On the other hand, 5.6/7.0/7.1 peak around 35%, which seems to indicate people are moving on to new versions faster. This is quite encouraging!

    PHP requirements in Packages

    The second dataset covers which PHP versions are required by the packages present on Packagist. I only check the require statement in each package's current master version to see what the latest requirement is.

    PHP Requirements - Current Master - May 2017 (+/- diff from November 2016)

    PHP 5.2:   2.13% (-0.22)
    PHP 5.3:  37.6%  (-3.65)
    PHP 5.4:  28.38% (-1.74)
    PHP 5.5:  17.11% (+0.13)
    PHP 5.6:   9.37% (+3.15)
    PHP 7.0:   4.61% (+1.53)
    PHP 7.1:   0.81% (+0.81)

    A few observations: This is moving pretty slowly, as usual. I think I can give up trying to advise for change; it doesn't seem to be working. On the other hand, it looks like Symfony is going to require 7.0 or 7.1 for its v4, coming out later this year, so hopefully that will shake things up a bit and make more libraries realize they can bump to PHP 7.

    PHP Requirements - Recent Master - May 2017 (+/- diff from Current Master November 2016)

    In response to Nikita's comment below, I ran the requirements numbers for packages that had some sort of commit activity over the last year. This excludes all stale/done packages and looks much more encouraging, but the difference points are probably overly large because they compare to the old numbers, which included everything. Take those with a pinch of salt; in the next six-month update I'll have more trustworthy numbers.

    PHP 5.2:   1.52% (-0.83)
    PHP 5.3:  23.15% (-18.1)
    PHP 5.4:  24.41% (-5.71)
    PHP 5.5:  23.7%  (+6.72)
    PHP 5.6:  16.81% (+10.59)
    PHP 7.0:   8.73% (+5.65)
    PHP 7.1:   1.67% (+1.67)

    2017-05-07T14:00:00+00:00 Jordi Boggiano 2017-05-07T14:00:00+00:00 Planet PHP More Android phones than ever are covertly listening for inaudible sounds in ads

    (credit: Arp et al.)

    Almost a year after app developer SilverPush vowed to kill its privacy-threatening software that used inaudible sound embedded into TV commercials to covertly track phone users, the technology is more popular than ever, with more than 200 Android apps that have been downloaded millions of times from the official Google Play market, according to a recently published research paper.

    As of January, there were 234 Android apps that were created using SilverPush's publicly available software developer kit, according to the paper, which was published by researchers from Technische Universität Braunschweig in Germany. That represents a dramatic increase in the number of Android apps known to use the creepy audio tracking scheme. In April 2015, there were only five such apps.

    The apps silently listen for ultrasonic sounds that marketers use as high-tech beacons to indicate when a phone user is viewing a TV commercial or other type of targeted audio. A representative sample of just five of the 234 apps have been downloaded from 2.25 million to 11.1 million times, according to the researchers, citing official Google Play figures. None of them discloses the tracking capabilities in their privacy policies.
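    Conceptually, detecting such a beacon is a matter of looking for unusual energy in the near-ultrasonic band of the microphone signal. A toy numpy sketch; the beacon frequency, band edges, and threshold are assumptions for illustration, not SilverPush's actual parameters:

```python
import numpy as np

FS = 44_100          # sample rate (Hz)
BEACON_HZ = 18_500   # assumed near-ultrasonic beacon frequency

def has_beacon(signal, fs=FS, band=(18_000, 20_000), threshold=2.0):
    """Flag a signal whose spectrum has strong mean energy in the
    near-ultrasonic band relative to the audible band below it."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    ultra = spectrum[(freqs >= band[0]) & (freqs < band[1])].mean()
    audible = spectrum[(freqs >= 100) & (freqs < band[0])].mean()
    return ultra > threshold * audible

t = np.arange(FS) / FS                       # one second of audio
speech = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in for audible content
beacon = 0.2 * np.sin(2 * np.pi * BEACON_HZ * t)

print(has_beacon(speech))           # no ultrasonic content
print(has_beacon(speech + beacon))  # beacon energy present
```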


    2017-05-05T15:14:27+00:00 Dan Goodin 2017-05-05T15:14:27+00:00 Ars Technica Measles outbreak rages after anti-vaccine groups target vulnerable community

    MINNEAPOLIS, MN - APRIL, 28: Lydia Fulton, LPN, administers the MMR vaccine to a child at Children's Primary Care Clinic. (credit: Getty | The Washington Post)

    Minnesota is experiencing its largest measles outbreak since the 1990s following a targeted and intense effort by anti-vaccine groups there to spread the false belief that vaccinations cause autism.

    As of Thursday, health officials reported 41 confirmed cases, nearly all unvaccinated children from a Somali immigrant community in Hennepin County. The community has for years been a target of anti-vaccine groups, aided by Andrew Wakefield, a fraudulent former physician.

    In the early 2000s, the large Somali immigrant population had high vaccination rates. But in 2008, fear that their children were suffering from higher rates of autism swept through the community. Though research later concluded that autism rates were not unusually high in the community, anti-vaccination activists pounced on the panic. The activists held community meetings and invited Wakefield to visit with scared families. Vaccination rates dropped from 92 percent in 2004 to 42 percent in 2014.


    2017-05-05T18:10:57+00:00 Beth Mole 2017-05-05T18:10:57+00:00 Ars Technica EU responds mildly, but Trump effectively strikes out the privacy of European citizens

    While the world, a good week in, is piling onto the new American president over a controversial travel ban, Trump has also signed an executive order that may well have far-reaching consequences for the services that American cloud providers offer in Europe. The question above all is whether European companies and consumers should still make use of […]

    2017-01-31T07:00:58+00:00 Jeroen Mulder 2017-01-31T07:00:58+00:00 Full MP3 support coming soon to Fedora

    Both MP3 encoding and decoding will soon be officially supported in Fedora. Last November the patents covering MP3 decoding expired and Fedora Workstation enabled MP3 decoding via the mpg123 library and GStreamer. This update allowed users with the gstreamer1-plugin-mpg123 package installed on their systems to listen to MP3 encoded music.

    The MP3 codec and Open Source have had a troubled relationship over the past decade, especially within the United States. Historically, due to licensing issues Fedora has been unable to include MP3 decoding or encoding within the base distribution. However, many users utilized 3rd party repositories to enable MP3 support.

    A couple of weeks ago Fraunhofer IIS and Technicolor terminated their licensing program, and just a few days ago Red Hat Legal provided the permission to ship MP3 encoding in Fedora. There will be a bit of time whilst package reviews are carried out and tools that are safe to add are identified, as only MP3 is cleared and not other MPEG technologies. However, it will soon be possible to convert physical media or other formats to MP3 in Fedora without 3rd party repositories.

    2017-05-05T13:08:08+00:00 James Hogarth 2017-05-05T13:08:08+00:00 Fedora People Hans-Juergen Schoenig: Why favor PostgreSQL over MariaDB / MySQL

    For many years MySQL and PostgreSQL were somewhat competing databases, which still addressed slightly different audiences. In my judgement (which is of course a bit biased) PostgreSQL always served my professional users, while MySQL had a strong standing among web developers and so on. But, after Oracle took over MySQL I had the feeling that […]

    The post Why favor PostgreSQL over MariaDB / MySQL appeared first on Cybertec - The PostgreSQL Database Company.

    2017-04-25T09:45:58+00:00 2017-04-25T09:45:58+00:00 Planet PostgreSQL De-spamming service “Unroll” selling your inbox to Uber shows the importance of information hygiene, yet again

    Privacy: It was a perfect service: sorting your mail and not just removing all spam for you, but also unsubscribing you from all of that spam garbage going forward. It kept your inbox perfectly clean. But behind the curtains, it also sold your inbox to the highest bidder.

    Sometimes, you’re maliciously signed up to tens of thousands of mailing lists because somebody was annoyed with something you said somewhere. The cost of doing so is low, and it causes a ton of headache as you’re getting hundreds of spam messages per minute. Fortunately, most of those are double-opt-in confirmation mails — “click this link to confirm the subscription” — but maybe five percent are not, and those malicious signups will continue to clobber your inbox with noise.
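    The reason the double-opt-in mails are the lesser problem is that each one is a single message that can be filtered mechanically. A toy filter; the keyword patterns below are invented for illustration and are nothing like a production spam filter:

```python
import re

# Crude heuristic for double-opt-in confirmation mail; a real filter
# would use sender reputation and message structure, not just keywords.
CONFIRM_RE = re.compile(
    r"(confirm (your )?subscription|click .* to confirm|verify your email)",
    re.IGNORECASE,
)

def is_confirmation(subject: str, body: str) -> bool:
    return bool(CONFIRM_RE.search(subject) or CONFIRM_RE.search(body))

print(is_confirmation("Please confirm your subscription", ""))
print(is_confirmation("Invoice #123", "Payment due Friday"))
```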

    Enter Unroll, which was the solution for this scenario: you gave it access to your mailbox, and it would not only detect and remove such unwanted spam, but also unsubscribe you from those tens of thousands of malicious subscriptions. Except, as it turns out, they also kept every single one of your mails, including those with passwords and other sensitive information, and sold them to the highest bidder.

    It was just a short passage in an otherwise fascinating portrait of the Uber CEO by The New York Times:

    New York Times quote

    So, the service Unroll was bought by Slice Intelligence. This is the first red flag: even if the service you signed up for were honest, their buyer may not be. (According to a quoted person below, Slice Intelligence bought Unroll specifically because they had access to tons of private mailboxes.)

    This highlights the importance of information hygiene.

    Information hygiene means being aware not only of what somebody claims to do with your data, but of what they are able to do. For example, if a service promises to sort your email for you, then it necessarily must also be able to read all that email, for the action of sorting requires observation; consequently, they are also able to sell your private mails to others. This is an ability they hold regardless of what they promise to do, or more relevantly, appear to promise to do.

    The act of sorting requires observation. Therefore, any service sorting your data must also be able to read all your data.

    In a blog post about the revelation that they sell inbox data, the Unroll CEO states that “it was heartbreaking to see that some of our users were upset to learn about how we monetize our free service”. The comments are, predictably, furious: the top comment states that “this is a one-strike-I-leave-the-service kind of thing”.

    That same top comment also states that it’s a big deal to give somebody access to their inbox. Doing so should always, always, be done with the awareness that they will at least read all of it (if nothing else, to determine which mails to read closer, to perform the promised service), and that any information, once read, cannot be unread – but can be processed, aggregated, sold, et cetera.

    If you are providing your inbox to somebody else, and want privacy, you need to encrypt your mails, just like you’re encrypting your Internet connection to prevent others from eavesdropping on it.

    At Hacker News, a person named Karl Katzke elaborates further:

    I worked for a company that nearly acquired Unroll. At the time, which was over three years ago, they had kept a copy of every single email of yours that you sent or received while a part of their service. Those emails were kept in a series of poorly secured S3 buckets. A large part of Slice buying was for access to those email archives. Specifically, they wanted to look for keyword trends and for receipts from online purchases.

    The founders of Unroll were pretty dishonest, which is a large part of why the company I worked for declined to purchase the company. As an example, one of the problems was how the founders had valued and then diluted equity shares that employees held. To make a long story short, there weren’t any circumstances in which employees who held options or an equity stake would see any money.

    I hope you weren’t emailed any legal documents or passwords written in the clear.

    Take a moment to absorb that, then add the fact that they ran a useful service that many subscribed to, combined with that sloppiness (not to say borderline malice) with people’s private data – and sprinkle the CEO’s “heartbrokedness” when users learned how they made money on top.

    Last but not least, Unroll tries to deflect blame here by saying they’re only selling “anonymized” data. It must be remembered that anonymization is hard. As in, really really really hard. Most data can be de-anonymized; strong anonymization is basically as hard as strong encryption, and most people doing anonymization are happy amateurs who do not understand the scope and difficulty of the task.

    Privacy remains your own responsibility.

    Syndicated Article
    This article has previously been published at Private Internet Access.

    (This is a post from Falkvinge on Liberty, obtained via RSS at this feed.)

    2017-05-03T18:00:07+00:00 Rick Falkvinge 2017-05-03T18:00:07+00:00 Falkvinge on Infopolicy Sven Hoexter: Chrome 58 ignores commonName in certificates

    People using Chrome might have already noticed that some internal certificates created without a SubjectAlternativeName extension fail to verify. Finally the Google Chrome team stepped forward, and after only 17 years of SubjectAlternativeName being the place for FQDNs to verify as valid for a certificate, they started to ignore the commonName.

    Currently Debian/stretch still has Chromium 57, but Chromium 58 is already in unstable, so some more people might notice this change soon. I hope that everyone who maintains some broken internal scripting to maintain internal CAs now re-reads the OpenSSL Cookbook to finally fix this stuff. In general I recommend basing your internal CA scripting on easy-rsa, to avoid making every mistake in certificate management on your own.
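    For those rewriting such scripting, the fix itself is small: make sure the extension section used at signing time carries a subjectAltName. A sketch of the relevant OpenSSL config fragment (hostnames here are placeholders):

```
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = DNS:internal.example.com, DNS:alt.internal.example.com
```

    Reference it when signing, e.g. openssl x509 -req ... -extfile san.cnf -extensions v3_req, so the issued certificate names the FQDN in the place Chrome 58 now exclusively looks.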

    2017-04-26T10:08:38+00:00 Sven Hoexter 2017-04-26T10:08:38+00:00 Planet Debian Orange UK Email Closure

    United Kingdom-based ISP Orange (now part of Everything Everywhere aka EE) has announced that they are shutting down their email service as of May 31, 2017. This affects users at these domains:,,,,,,,, This does not affect non-UK Orange email users.

    Follow this link for more details.

    This post first appeared on Al Iverson's Spam Resource.

    2017-04-26T15:33:00+00:00 Al Iverson 2017-04-26T15:33:00+00:00 Al Iverson's Spam Resource Cuba Getting Faster YouTube Access in Next 24 Hours, Thanks to Deal Signed in December

    In December of last year, Cuba signed a deal with Google to enable faster access to content served via its popular platforms such as Gmail and YouTube. Under the deal, Cuba would gain access to a network of local servers called Google Global Cache that would reduce access time for content served via Google-owned sources. Today, Doug Madory, Dyn's Director of Internet Analysis, emailed to report that Google’s Global Cache (GGC) nodes have finally gone active in the past 24 hours. "It is a milestone as this is the first time an outside internet company has hosted anything in Cuba. Also, this is the first tangible development from Google's involvement in the country since wiring Kcho’s art studio with free WiFi"

    Also pointed out by Madory: If you drop this Cuban IP address [] into your browser, it will redirect you to Google’s homepage. This is one of the IPs ETECSA is using for the GGC service.

    Follow CircleID on Twitter

    More under: Access Providers, Web

    2017-04-26T19:54:00+00:00 CircleID Reporter 2017-04-26T19:54:00+00:00 CircleID Glow LEDs with Google Home

    Recently I tried experimenting with Google Home, trying to voice-control LEDs. Broadly, the whole thing can be split into two parts:

    1. A custom command that makes a web POST request to fetch the result.
    2. A simple Flask app that can receive a POST request with parameters and glow some LEDs based on the POST request data.

    For part one, the custom commands were possible thanks to the Google Actions APIs. I used API.AI for my purpose since they had good documentation. I won't go into detail explaining the form fields in API.AI; they have done a good job with the documentation, so I will just share my configuration screenshots for quick reference and understanding. In API.AI, conversations are broken into intents. I used one intent (Default Welcome Intent) and a followup intent (Default Welcome Intent – custom) for my application.


    Here's my first intent, which basically greets the user and asks for an LED colour when the custom command “glow LEDs” is activated.


    As you can see, the “User says” field is what defines my command; you can add multiple statements with which you want to activate the command. The Action and Contexts are set when you create a followup intent. The Text response is the part your Google Home will use as the response.

    Next is the Followup Intent which basically takes the User response as input context (which is handled automatically when you create the followup intent) and looks for required parameters and tries to process the request.


    Here the expected “User says” would be a colour; red, blue and green is what I allowed. In API.AI you can use their ML to process the speech and find your needed parameters and values. I needed colours, hence I used @sys.color. There are other entities like @sys.address or @sys.flight, etc. If these entities don't serve your purpose, you might want to go vanilla and process the speech on your web-API end. The latter part of the followup intent is a bit different: we are fulfilling the user request via a webhook here. The Response here is the fallback response in case the web request fails; the success response is received from the webhook response body.


    The fulfilment option won't be activated until you add your webhook in the Fulfillment section. That's all for part one. You can also use the Google Web Simulator to test your application on the go.


    In part two, I used a Raspberry Pi, 3 LEDs (red, blue, green), a 1K ohm resistor, some wires, a breadboard (optional) and a T-cobbler board (optional). Now, we will write a Flask application that will accept a POST request and set the required GPIO pin output high/low.

    <script src=""></script>

    You can check the request and response structure you need in the docs. Next, this application receives the calls from the webhook and triggers the targeted LED depending on the resolvedQuery. The above code was written so that I can test locally with GET requests too. I used to tunnel and expose my Flask application to the external world. Following is the circuit diagram for the connections.
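    The embedded gist did not survive syndication, so here is a minimal sketch of the kind of Flask app described. The pin numbers, the /webhook route, and the JSON shape (the old API.AI webhook put matched parameters under result.parameters, and the original keyed off resolvedQuery instead) are my assumptions, not the author's exact code:

```python
from flask import Flask, request, jsonify

try:
    import RPi.GPIO as GPIO  # only available on the Pi itself
    HAVE_GPIO = True
except ImportError:
    HAVE_GPIO = False        # allows testing the app off-device

PINS = {"red": 17, "green": 27, "blue": 22}  # assumed BCM pin numbers

app = Flask(__name__)

if HAVE_GPIO:
    GPIO.setmode(GPIO.BCM)
    for pin in PINS.values():
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

@app.route("/webhook", methods=["POST"])
def webhook():
    data = request.get_json(force=True)
    # old API.AI format: matched entities sit under result.parameters
    colour = data.get("result", {}).get("parameters", {}).get("color", "").lower()
    if colour not in PINS:
        return jsonify({"speech": "Sorry, I only know red, green and blue."})
    if HAVE_GPIO:
        GPIO.output(PINS[colour], GPIO.HIGH)  # light the matching LED
    return jsonify({"speech": "Glowing the %s LED!" % colour})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

    The "speech" key in the response body is what API.AI read back as the success response mentioned above.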


    Following is the result:

    <iframe allowfullscreen="true" class="youtube-player" height="480" src=";rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="854"></iframe>




    2017-04-28T20:47:47+00:00 subho 2017-04-28T20:47:47+00:00 Fedora People My virtual living room: Setting up a social VR space in the house

    SEATTLE—The HTC Vive isn't like any computing device I've ever put in a home. This "room-scale" virtual-reality system is at the bleeding edge of what I'd call "home-appropriate"—meaning, it's pretty ornate and complicated, but not so much that you need to dedicate an entire lab or office space to it.

    Though you might assume that. Many question marks currently hover over the burgeoning VR industry, thanks to issues like high costs, required computing power, nausea potential, and an unproven field of early software. The Vive goes one step further by also asking its buyers to clear out some serious space so that they can walk across a room and feel fully transported to a game or app's impressive virtual space. The demands that Microsoft asked of Kinect buyers a few years ago are tame compared to the cleared floors and mounted motion trackers of HTC's dream future.

    Demand for space has been easy to shrug off after nearly a year of expo and convention demos, where game developers have done the setup legwork for us. We at Ars have spent less of our HTC Vive preview time sorting out logistics and more time letting our jaws drop to the floor. When it's hitting all cylinders, the SteamVR vision of room-scale VR is crazy-bonkers compelling. But what happens when VR dreams collide with the reality of installing and using one of these things in a home?


    2016-03-13T14:30:06+00:00 Sam Machkovech 2016-03-13T14:30:06+00:00 Ars Technica A Ban on CD Ripping Marks This Year's Lowest Point in International Copyright: 2015 in Review

    2015 has been quite an interesting year for copyright law around the world—at least in the sense of that apocryphal Chinese curse, “may you live in interesting times.” That is to say that most of this year's copyright developments have been bad for users, but with one notable exception.

    To open on a broadly positive note, let's review that exception: the groundbreaking July resolution on copyright reform by the European Parliament, led by the long-suffering Member of the European Parliament (MEP) for the Pirate Party, Julia Reda. That resolution, although weaker than we would have liked, nevertheless sent several clear messages to the European Commission. Amongst these messages were that payment or permission should not be required before linking to websites or taking photographs of public buildings ("freedom of panorama"). On December 9, the Commission released a Copyright Communication (PDF) drawing on the Parliament's report, that foreshadowed the introduction of a few modest changes to European copyright law, including a few new European-wide copyright exceptions such as freedom of panorama, and text and data mining.

    Against this modest positive movement, we have to count the cost of a number of unfortunate losses elsewhere. Topping the list of these surely has to be the intolerable decision of the United Kingdom High Court in June to strike down the legalization of CD ripping, to which British lawmakers had agreed only after years of reviews and lobbying. As a result, if anyone in the United Kingdom is still buying physical CDs nowadays, they are breaking the law if they wish to transfer the contents onto their phone or computer. Could you imagine a decision more out of step with the digital age?

    Almost as bad were the unwarranted copyright term extensions that took place during 2015. These include Jamaica's surprise extension of copyright by 45 years to life plus 95 years, and Canada handing over an additional 20 years to music publishers and performers. Worse may be to come in 2016, if the Trans-Pacific Partnership (TPP) comes into force. This will force six countries across the Pacific Rim to extend their term of copyright to life plus 70 years; a term that economists unanimously agree is unreasonably long to achieve its stated purpose of providing an incentive to creators. But none of these hold a candle to South Africa's proposal to create an unlimited term of copyright as a component of its proposed new orphan works regime—a proposal we hope will be dropped in the final law.

    There were also a number of adverse moves to harshen copyright enforcement measures around the world. South Africa was a late entrant in this category, with an ill-advised bill that would criminalize almost any online copyright infringement. But sweeping the field for this year's worst new enforcement measures is Australia. In short order it brought in a data retention law, a law to block copyright-infringing websites (which has already begun to be misused), as well as a three-strikes warning system for alleged infringing downloaders.

    We can only hope that copyright law changes in 2016 will be more positive for users, or at the very least, a bit less interesting.

    This article is part of our Year In Review series; read other articles about the fight for digital rights in 2015. Like what you're reading? EFF is a member-supported nonprofit, powered by donations from individuals around the world. Join us today and defend free speech, privacy, and innovation.

    2015-12-25T23:39:25+00:00 Jeremy Malcolm 2015-12-25T23:39:25+00:00 Deeplinks Nikhil Sontakke: Virtual IP Failover for PostgreSQL in AWS


    We at SecureDB rely on the highly rated open source enterprise-class PostgreSQL database. In an earlier blog post, we laid down the reasons as to why SecureDB chose PostgreSQL as its backend of choice.


    SecureDB provides Encrypted User Identity Management via RESTful APIs and the cloud-version of our offering is hosted on AWS. We take high availability very seriously and along with all the other goodies of regular logical and physical backups (we back them into S3 and also offline), we configured our PostgreSQL backend with streaming replication.


    To seamlessly allow for failover in case the master goes down, we used a virtual IP address failover mechanism for PostgreSQL in our AWS environment. The idea is that the failover node, or another third “witness” node, monitors the master instance to which the virtual IP address is assigned; if the master node becomes unreachable, the slave node can take over the virtual IP address after ensuring that it has been promoted successfully to become the new master.

    Virtual IP failover scenario

    The majority of the inspiration came from an AWS article. But the major difference is that that article talks about instances in the public subnet of the VPC. In our case, for security reasons and also because of our architecture, the PostgreSQL instances were part of a private subnet with no direct access to the outside world. So we employed the same technique for our private-subnet-based instances.


    Without going into too much of AWS details about VPC and Security Groups (there’s plenty of documentation on the AWS website for this), create both the “Master” and “Slave” instances in your private subnet.


    While creating the “Master”, assign, let’s say, private IP “” and secondary private IP ““. This secondary private IP will be the virtual IP that floats between the master and the slave instances:

    Private IP and Secondary Private IP of an AWS instance in a Private Subnet



    While creating “Slave”, assign a private IP “” for example.


    Please take care to ensure that the virtual IP “” can be assigned to either of the instances properly. We use CentOS 6.5 and had to configure the proper entries in /etc/sysconfig/network-scripts on both VMs to allow assignment of this virtual IP. Here’s how a sample config looks on both the “Master” and “Slave” instances:

    cat /etc/sysconfig/network-scripts/ifcfg-eth0:0
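    The file itself is only a few lines; a sketch with placeholder values (the article's real addresses are not shown in the syndicated copy):

```
DEVICE=eth0:0
BOOTPROTO=static
ONBOOT=yes
IPADDR=<virtual-ip>        # the floating secondary private IP, not the primary
NETMASK=255.255.255.0      # placeholder; match your subnet's mask
```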


    Now both instances are ready to handle virtual IP re-assignments via the AWS CLI. Please configure PostgreSQL streaming replication on the “Slave” node in the standard way after this.


    We then need to come up with a monitor script. Again, AWS has provided a virtual IP failover script which can be customized to one’s requirements. We need to set proper values for the variables in the shell script. The notable ones are:


    HA_Node_IP – This should point to HA Node #2’s primary private IP address ( in this example).

    VIP – This should point to private virtual IP address that will float between the two HA Nodes ( in this example).

    REGION – This should point to region where your HA nodes are running (us-east-1 in this example).


    We modified this script to tolerate temporary ping failures. Only if the ping fails 3 or more times consecutively do we issue the virtual IP failover command to fail the IP over to the “Slave” instance. It also emails us after each ping failure to let us know about the issue at hand. Here’s a sample snippet from our script:

    if [ $FAIL_CNT -lt 3 ]; then
        echo "HA script: master unreachable. Rechecking after $SLEEP_SECS seconds. Try count $FAIL_CNT. Limit 3." | mailx -s "HA Script" "$ALERT_EMAIL"  # $ALERT_EMAIL: placeholder recipient
        sleep $SLEEP_SECS
    else
        echo `date` "-- HA heartbeat failed, taking over VIP"
        aws ec2 assign-private-ip-addresses --network-interface-id $ENI_ID --private-ip-address $VIP --allow-reassignment
    fi


    That’s it! Ensure that your virtual IP is assigned and reachable from the “Master” node, and run the above script forever from the “Slave” node. You could also run this script from another “Witness” node if desired. Since it uses the AWS CLI to make the IP re-assignment, the virtual IP gets re-assigned appropriately and AWS updates its internal routing metadata correctly, allowing queries to be routed to the newly promoted replica on the “Slave” node. We cannot over-emphasize the importance of testing all scenarios before deploying this in your production environment! We have had to resort to this virtual-IP-based failover once so far in the last year. Obviously, you cannot use this across multiple regions, but it works well enough if you want to do HA within your region/zone. Hope it helps you as well!
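    For readers who prefer to see the whole loop in one place, here is a rough Python re-imagining of that witness script. The original is a shell script; the function names, the probe/act split, and any IPs are placeholders of mine, not values from the article:

```python
import subprocess
import time

FAIL_LIMIT = 3   # consecutive misses before failing over, as in the article
SLEEP_SECS = 5   # pause between re-checks

def master_alive(ip):
    """Single ICMP probe; True if the master answered one ping."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def take_over_vip(eni_id, vip):
    """Reassign the secondary private IP to this node via the AWS CLI."""
    subprocess.check_call([
        "aws", "ec2", "assign-private-ip-addresses",
        "--network-interface-id", eni_id,
        "--private-ip-address", vip,
        "--allow-reassignment"])

def monitor(master_ip, eni_id, vip, probe=master_alive, act=take_over_vip):
    """Act only after FAIL_LIMIT consecutive probe failures."""
    fails = 0
    while fails < FAIL_LIMIT:
        if probe(master_ip):
            fails = 0            # any success resets the strike count
        else:
            fails += 1
            time.sleep(SLEEP_SECS)
    act(eni_id, vip)             # three strikes: take over the virtual IP
```

    In production this would run under a supervisor and keep monitoring after a takeover; the sketch performs a single failover and exits.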

    The post Virtual IP Failover for PostgreSQL in AWS appeared first on SecureDB.

    2015-12-07T04:59:48+00:00 2015-12-07T04:59:48+00:00 Planet PostgreSQL Men have a better sense of direction than women

    But just a droplet of testosterone under the tongue made women navigate more the way men do. That is the finding of Norwegian research. Scientists gathered eighteen men and eighteen women and gave them an hour to get to know the map of a virtual maze. The test subjects then took their place in an MRI scanner. With […]

    Read the full article (Men have a better sense of direction than women) at

    2015-12-07T07:56:03+00:00 Caroline Kraaijvanger 2015-12-07T07:56:03+00:00 Is wifi tracking around shops against the law?

    The Dutch Data Protection Authority (College bescherming persoonsgegevens) has rapped one of the big providers of wifi tracking in the Netherlands on the knuckles, I read on Tweakers. For the purpose of wifi analytics, the company Bluetrace tracked people in and around shops via the wifi signal of their smartphones, without properly informing them. In its report, the regulator notes that more was collected than necessary, that the data was kept too long, and that people on the public road were tracked as well.

    Wifi tracking has been growing in popularity for a few years now. In short, a shop measures the wifi signals of passing mobile phones and then registers where those signals go, how long a phone stays in one place, et cetera. Via the MAC address, the measurements can be uniquely correlated to a single phone.

    The argument is then always: that's no problem, because it is only your phone and they don't know your name. But the Cbp makes short work of that: these are personal data (since they are traceable to the owner of the phone), and location information is rather sensitive:

    Because smartphones and tablet computers are inextricably linked to their owners, the movements of the devices provide a very intimate view into the lives of their owners.

    That you don't know the person's name is beside the point. It is the settled view of the privacy regulators that what matters is whether the “data can, via further steps, be linked to a specific person.” In short, what matters is that the data are about one person.

    It is then up to Bluetrace to show why it is allowed to do this under the Wbp (the Dutch Data Protection Act). The only realistic option is the legitimate-interest ground: the interest in collecting the data outweighs the privacy interest of the passing smartphone owners, and everything possible has been done to safeguard privacy.

    In itself it is a legitimate interest to want to know how people walk through your shop. But the question then becomes: does it have to be measured this way, couldn't it be a notch less? For example, by switching the sensors off outside opening hours, or by storing, instead of raw MAC addresses, hashes that are erased after 48 hours.

    Another problem is that MAC addresses are recorded of people who walk past the shop but do not go in. “Oh no, processing on the public road”, so it is categorically forbidden? Well, no, it is a bit more subtle: you may collect personal data on the public road, but you must then have a specific interest that outweighs that privacy interest. And such an interest will rarely exist. What legitimate interest is there in measuring who walks down the street?

    Furthermore, there was a lack of adequate information about what is done with the personal data and about how you, as a shop visitor, can get information about this data collection.

    In short: nice idea, botched execution. The Cbp says in so many words that it is certainly possible to legally collect personal data from smartphones to monitor shop visits, provided you arrange it a bit more carefully. The press makes much of the hashing of the MAC addresses, but that is not the core issue: the point is to collect as little data as possible and to throw things away again as quickly as possible.
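    The hash-and-expire mitigation mentioned in this article can be sketched in a few lines of Python. This is my own illustration of a salted hash with periodic salt rotation (the 48-hour window comes from the text; everything else, including function names, is assumed), not Bluetrace's or the regulator's code:

```python
import hashlib
import os
import time

SALT_LIFETIME = 48 * 3600   # 48 hours, per the suggested retention limit
_salt = os.urandom(16)
_salt_born = time.time()

def pseudonymise(mac: str) -> str:
    """Salted hash of a MAC address; rotating the salt makes old
    measurements unlinkable once the window has passed."""
    global _salt, _salt_born
    if time.time() - _salt_born > SALT_LIFETIME:
        _salt, _salt_born = os.urandom(16), time.time()  # forget the old link
    return hashlib.sha256(_salt + mac.encode()).hexdigest()
```

    Within one salt window the same phone maps to the same pseudonym (so walking patterns can still be counted), but after rotation nothing ties new measurements to old ones.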


    2015-12-02T07:18:29+00:00 Arnoud Engelfriet 2015-12-02T07:18:29+00:00 Internetrecht: actualiteiten en commentaar The PowerDNS Spring Cleaning

    Hi everybody,

    In this post we’d like to update you on what has been achieved in the development of the PowerDNS 4.x products: Authoritative, Recursor and dnsdist. First, I am very proud that we managed to do a “spring cleaning”. As mentioned earlier, over time any software project picks up so-called “technical debt”. Things that looked good at the time, shortcuts to quickly get to a new feature, whole features that weren’t fully thought through: it all piles up.

    It is very wise for any piece of software to periodically take a step back and do a cleanup, but it rarely happens. Some projects do go for the “grand redesign”, but this frequently does not lead to a product that is actually better in production (“Second system effect“).

    We’re grateful for our customers and users that allowed us the time for the cleanup, and happy that the PowerDNS community itself too had the discipline to work on these invisible improvements. It is always more fun to add new features than to break down things that were working to make them better!

    So what happened under the hood? Quite a lot of things.

    C++ 2011

    PowerDNS is mostly written in C++, and since 2011 this monumental language has been available in a new revision. C++ 2011 makes life a lot easier on programmers, which in turn means you can deliver more functionality in the same time, or perhaps the same functionality but then with higher quality.

    C++ 2011 has merged a lot of functionality from the equally monumental Boost libraries, and we’ve taken the opportunity to move PowerDNS to the ‘native’ versions of these functionalities: range-for instead of BOOST_FOREACH, std::shared_ptr instead of boost::shared_ptr, std::to_string instead of boost::lexical_cast, for example.

    Taking this step required revamping our entire build & regression testing infrastructure because the environments on which we test did not support the recent versions of the C++ compilers and libraries we required.


    DNSName

    Within PowerDNS hid a sin. DNS names frequently look like “ASCII strings”. But they are anything but. DNS names compare case-insensitively. Also, there is the issue of the trailing dot: “www.powerdns.com” and “www.powerdns.com.” are the same from a DNS perspective. Life becomes even more complicated when we realize that DNS names are ‘8-bit clean’. You can put any binary string in DNS and it should work. But how do we then write down a label containing, say, a space or a dot? With some kind of backslash escape?

    There is only one worthy answer to these questions: we don’t. DNS is internally not stored as ASCII but as a series of labels with specified lengths. So “www.powerdns.com” is stored as the value 3, followed by www, followed by the value 8, followed by powerdns, followed by 3, by com and finally the zero value. This is the right way to store DNS names.
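    That label encoding can be illustrated with a toy Python sketch (nothing like the actual C++ DNSName class; it ignores escaping and name compression and just shows the storage scheme):

```python
def encode_dns_name(name: str) -> bytes:
    """Toy wire-format encoder: a length byte before each label,
    terminated by a zero byte."""
    out = b""
    for label in name.rstrip(".").split("."):
        raw = label.encode("latin-1")   # labels are 8-bit clean
        out += bytes([len(raw)]) + raw  # length prefix, then the label
    return out + b"\x00"                # the root label ends the name
```

    encode_dns_name("www.powerdns.com") yields b"\x03www\x08powerdns\x03com\x00", and the trailing-dot form encodes identically, which is exactly why the trailing-dot ambiguity disappears in this representation.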

    To achieve this, we wrote the DNSName class which stores DNS values in this way, and also offers ways to parse DNSNames straight from packets, and to output them in “human friendly” form. Over the course of 4.0 development, DNSName got reimplemented a few times as we learned more, finally taking a shape where we could do canonical ordering very fast. This gave us the benefit of cleaning up a lot of ugly reversal code, and allowing all relevant caches to be purged not just name-by-name, but for whole zones in one go.

    Finally, we’ve equipped DNSName with many methods that are useful in a DNS context like isPartOf() and chopOff(), removing lots of redundant code from PowerDNS.

    Ridding PowerDNS from “DNS Names as ASCII” was a monumental undertaking that would not have been possible without extensive help from the community, specifically Kees Monshouwer and Aki Tuomi.


    On a related note, for a long time, the PowerDNS Recursor showed its heritage as a spin-off from PowerDNS Authoritative Server. In the Authoritative Server, backends store DNS details as ASCII. So to encode the AAAA record 2001:888:2000:1d::2, we actually have that string “2001:888:2000:1d::2” of 19 characters in the database. This is not the most efficient thing to do, but for databases it is ok. They are good with strings.

    However, INTERNALLY, it makes very little sense to drag these ASCII representations around and convert them into binary addresses and back again for processing. This is the sort of technical debt you build up over 15 years.

    With great effort, we’ve been able to purge the PowerDNS Recursor of the DNSResourceRecord struct which carried those ASCII strings, and move everything to the DNSRecord class. We’ve checked, no bits of ASCII are hurt when answering questions in the Recursor anymore!

    Netmask & Domain trees

    Frequently, PowerDNS products need to check an IP address or domain name against a long list of possible matches. Based on the DNSName, ComboAddress and Netmask classes, Aki Tuomi and we have built generic structures that allow for high speed lookups of domain names & IP addresses against domain suffixes and netmasks, using Patricia Tries. These rapid lookups are now used within all three PowerDNS products, and are also available from Lua.

    We can now safely disclose that our previous method for testing an IP address against many netmasks consisted of trying each one in order. Sorry for that.
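    The suffix-matching half of that idea can be illustrated with a toy Python version. Unlike the trie-backed C++ structures the post describes, this is a linear, dictionary-based sketch of my own, mirroring the chopOff()-style walk mentioned earlier:

```python
def suffix_lookup(name, suffixes):
    """Return the most specific entry in `suffixes` that `name` falls
    under, chopping labels off the front one at a time."""
    parts = name.rstrip(".").split(".")
    for i in range(len(parts) + 1):
        candidate = ".".join(parts[i:])   # e.g. www.x.com -> x.com -> com -> ""
        if candidate in suffixes:
            return candidate
    return None
```

    A trie answers the same question in one walk per label without re-joining strings, which is what makes the real structures fast enough for per-packet use.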

    Malloc Tracer

    C++ is a wonderful language, we think, especially if you don’t turn it into a circus. A big problem of C++ however is the astounding amount of memory allocations that happen under the hood. And while memory allocators have gotten better over the years, we discovered we were doing hundreds of mallocs per packet in some circumstances.  We’ve found that Heaptrack was helpful in getting a statistical overview of where our allocations were coming from, but we got very high per-packet precision using a very simple built-in (optional, turned off by default) malloc tracer. Using this, we’ve been able to reduce the allocation traffic by over 60% so far.

    As an example, in the common case, the PowerDNS Recursor will now only issue two small mallocs per packet.

    Configuration State

    We try and succeed in getting a performance boost out of using multiple CPUs. This is not straightforward. No two threads can alter the same memory simultaneously, or bad things would happen. The easy solution against that is to lock the data. It turns out that when you sprinkle locks all over your code, any performance boost is gone. Performance might in fact go down with more threads.

    A common case however is where a piece of memory rarely if ever changes and we can spare the memory to give each thread its own copy to read from. Only if something changed in the ‘master’ should each thread get a new copy. This idea is roughly what is known as Read Copy Update within system programming, and is for example also used by the great Knot DNS server. We created a set of classes called ‘State Holders‘ to bring this technology to PowerDNS as well.

    Package Builder, Repositories

    Although we loved Jenkins and we are mostly happy with Travis, we have gotten a lot of power from our ‘buildbot’ building engines. We now build PowerDNS for more and more platforms automatically, and push out those packages to our repositories on These repositories allow you to install the latest and greatest builds of PowerDNS using apt or yum, and get native packages. This makes testing 4.0 really easy.

    So what did we do with all those improvements?

    Once this better new infrastructure was in place, we’ve implemented many new things:

    • RPZ aka Response Policy Zone, as outlined here
    • IXFR slaving in the PowerDNS Recursor for RPZ
    • DNSSEC processing in Recursor (authoritative has had this for years)
    • DNSSEC validation
    • EDNS Client Subnet support in PowerDNS Recursor (authoritative has had this for years)
    • GEOIP backend supporting custom netmasks, “fields” in answers
    • Newly revived ODBC backend for talking to Microsoft SQL Server & Azure
    • Lua asynchronous queries for per-IP/per-domain status
    • Caches that can now be wiped per whole zone instead of per name
    • An astounding amount of dnsdist features (check out the movie & the presentation!)
    • And much more

    We’ll outline what is in 4.x more completely in an upcoming post, including details on when and how it will be released. For now, it may be good to know you can test these new features via the package builder and repo service as outlined above.

    Good luck!

    2015-11-28T15:48:01+00:00 berthubert 2015-11-28T15:48:01+00:00 Published articles Security Bug in Dell PCs Shipped Since 8/15

    All new Dell laptops and desktops shipped since August 2015 contain a serious security vulnerability that exposes users to online eavesdropping and malware attacks. Dell says it is prepping a fix for the issue, but experts say the threat may ultimately need to be stomped out by the major Web browser makers.

    At issue is a root certificate installed on newer Dell computers that also includes the private cryptographic key for that certificate. Clever attackers can use this key from Dell to sign phony browser security certificates for any HTTPS-protected site.

    Translation: A malicious hacker could exploit this flaw on open, public networks (think WiFi hotspots, coffee shops, airports) to impersonate any Web site to a Dell user, and to quietly intercept, read and modify all of a vulnerable Dell system’s Web traffic.

    According to Joe Nord, the computer security researcher credited with discovering the problem, the trouble stems from a certificate Dell installed named “eDellRoot.”

    Dell says the eDellRoot certificate was installed on all new desktops and laptops shipped from August 2015 to the present day. According to the company, the certificate was intended to make it easier for Dell customer support to assist customers in troubleshooting technical issues with their computers.

    “We began loading the current version on our consumer and commercial devices in August to make servicing PC issues faster and easier for customers,” Dell spokesperson David Frink said. “When a PC engages with Dell online support, the certificate provides the system service tag allowing Dell online support to immediately identify the PC model, drivers, OS, hard drive, etc. making it easier and faster to service.”

    “Unfortunately, the certificate introduced an unintended security vulnerability,” the company said in a written statement. “To address this, we are providing our customers with instructions to permanently remove the certificate from their systems via direct email, on our support site and Technical Support.”

    In the meantime, Dell says it is removing the certificate from all Dell systems going forward.

    “Note, commercial customers who image their own systems will not be affected by this issue,” the company’s statement concluded. “Dell does not pre-install any adware or malware. The certificate will not reinstall itself once it is properly removed using the recommended Dell process.”

    The vulnerable certificate from Dell. Image: Joe Nord

    It’s unclear why nobody at Dell saw this as a potential problem, especially since Dell’s competitor Lenovo suffered a very similar security nightmare earlier this year when it shipped an online ad tracking component called Superfish with all new computers.

    Researchers later discovered that Superfish exposed users to having their Web traffic intercepted by anyone else who happened to be on that user’s local network. Lenovo later issued a fix and said it would no longer ship computers with the vulnerable component.

    Dell’s Frink said the company would not divulge how many computers it has shipped in the vulnerable state. But according to industry watcher IDC, the third-largest computer maker shipped a little more than 10 million computers worldwide in the third quarter of 2015.

    Zakir Durumeric, a Ph.D. student and research fellow in computer science and engineering at the University of Michigan, helped build a tool on his site which should tell Dell users if they’re running a vulnerable system.

    Durumeric said the major browser makers will most likely address this flaw in upcoming updates.

    “My guess is this has to be addressed by the browser makers, and that we’ll see them blocking” the eDellRoot certificate. “My advice to end users is to make sure their browsers are up-to-date.”

    Further reading:

    An in-depth discussion of this issue on Reddit.

    Dan Goodin‘s coverage over at Ars Technica.

    Dell’s blog advisory.

    Update, 1:15 a.m. ET: Added link to Dell’s instructions for removing the problem.

    2015-11-24T05:44:36+00:00 BrianKrebs 2015-11-24T05:44:36+00:00 Krebs on Security Hamster rediscovered

    It’s so normal that you sometimes don’t speak about it - but …

    If you like to track your time in a fine-granular way, consider using project-hamster with the GNOME Shell extension.

    On Fedora, run:

    pkcon install hamster-time-tracker

    Then visit the hamster extension page and install the hamster plugin. The rest is pretty self-explanatory.

    2015-11-24T07:02:26+00:00 Fabian Deutsch 2015-11-24T07:02:26+00:00 Fedora People SIEM is not a product, it's a process..., (Fri, Nov 20th)

    This famous Bruce Schneier quote is so true that we can re-use it for specific topics like SIEM (Security Information and Event Management). Many organizations have already deployed solutions to process their logs and to generate (useful, I hope) alerts. The market is full of solutions that do a more or less good job, but the ROI of your tool will be directly related to the processes that you implement next to the hardware and software components. I'll give you two examples.

    The first one is the implementation of a mandatory, strong change-management procedure. Recently, I faced this story at a customer. I call this the "green status" effect: if the security monitoring tool does not report alerts and you assume that everything is running fine, you'll fail! Your SIEM's quality depends directly on the quality of the data sent to it. Within the customer's infrastructure, some critical devices were moved to a new VLAN (new IP addresses assigned to them), but the configuration of the collector was not changed to reflect this important change. Because events were sent to an rsyslog instance and split based on the source IP address, the new events were not properly collected. They lost many alerts!

    The second example focuses on asset management. Many SIEM vendors propose compliance packages (PCI, HIPAA, SOX, name your favorite one). The marketing message behind those packages is "be compliant out of the box". A typical correlation rule from such a package fires if the target is not known as a regular destination from the DMZ, OR known as a trusted target, OR known as a cardholder target; AND IF the destination port is not known as allowed (via an Active List); AND IF the traffic is not coming from a VPN device; AND IF the traffic is not coming from a SIEM device; AND IF the source is flagged as an attacker from the DMZ.

    Based on this rule, we must:

    • Define trusted hosts
    • Define cardholder hosts
    • Define the list of allowed ports
    • Categorize the VPN, SIEM devices
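    To make the classification burden concrete, the quoted rule can be roughed out in Python. All the hosts, ports and addresses below are invented placeholders; a real SIEM would use Active Lists rather than Python sets:

```python
# Invented example data standing in for the SIEM's Active Lists.
trusted_targets = {"10.0.1.5"}
cardholder_targets = {"10.0.2.9"}
regular_dmz_destinations = {"10.0.3.3"}
allowed_ports = {80, 443}
vpn_devices = {"10.0.9.1"}
siem_devices = {"10.0.9.2"}
dmz_attackers = {"192.0.2.66"}   # sources already flagged as attackers

def should_alert(src: str, dst: str, port: int) -> bool:
    """Mirror the rule quoted above: every condition must hold."""
    if dst in regular_dmz_destinations | trusted_targets | cardholder_targets:
        return False                      # target is a known destination
    if port in allowed_ports:
        return False                      # port is explicitly allowed
    if src in vpn_devices | siem_devices:
        return False                      # traffic from our own infrastructure
    return src in dmz_attackers           # only flagged attackers alert
```

    Every one of those sets has to be populated and kept current before the rule fires correctly, which is exactly the classification work listed above.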

    This means that to make this rule effective, there is a huge classification job to perform to fill the SIEM with relevant data (again!). Deploying a SIEM is not a one-shot process. You have to carefully implement procedures:

    • New devices must be provisioned in the SIEM configuration
    • Changes must be reflected in the SIEM configuration
    • Implement controls to detect unusual behavior (waiting for alerts is not enough)

    Happy logging!

    Xavier Mertens
    ISC Handler - Freelance Security Consultant
    PGP Key

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
    2015-11-20T11:14:03+00:00 2015-11-20T11:14:03+00:00 Published articles Jonathan McDowell: Updating a Brother HL-3040CN firmware from Linux

    I have a Brother HL-3040CN networked colour laser printer. I bought it 5 years ago and I kinda wish I hadn’t. I’d done the appropriate research to confirm it worked with Linux, but I didn’t realise it only worked via a 32-bit binary driver. It’s the only reason I have 32 bit enabled on my house server and I really wish I’d either bought a GDI printer that had an open driver (Samsung were great for this in the past) or something that did PCL or Postscript (my parents have an Xerox Phaser that Just Works). However I don’t print much (still just on my first set of toner) and once set up, the driver hasn’t needed much kicking.

    A more major problem comes with firmware updates. Brother only ship update software for Windows and OS X. I have a Windows VM but the updater wants the full printer driver setup installed and that seems like overkill. I did a bit of poking around and found reference in the service manual to the ability to do an update via USB and a firmware file. Further digging led me to a page on resurrecting a Brother HL-2250DN, which discusses recovering from a failed firmware flash. It provided a way of asking the Brother site for the firmware information.

    First I queried my printer details:

    $ snmpwalk -v 2c -c public hl3040cn.local iso.
    iso. = STRING: "MODEL=\"HL-3040CN series\""
    iso. = STRING: "SERIAL=\"G0JXXXXXX\""
    iso. = STRING: "SPEC=\"0001\""
    iso. = STRING: "FIRMID=\"MAIN\""
    iso. = STRING: "FIRMVER=\"1.11\""
    iso. = STRING: "FIRMID=\"PCLPS\""
    iso. = STRING: "FIRMVER=\"1.02\""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""

    I used that to craft an update file which I sent to Brother via curl:

    curl -X POST -d @hl3040cn-update.xml -H "Content-Type:text/xml" --sslv3

    This gave me back some XML with a URL for the latest main firmware, version 1.19, filename LZ2599_N.djf. I downloaded that and took a look at it, discovering it looked like a PJL file. I figured I’d see what happened if I sent it to the printer:

    cat LZ2599_N.djf | nc hl3040cn.local 9100
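    The same raw push can be scripted without nc, for instance in Python. This is a sketch under the same assumption the one-liner relies on, namely that the printer accepts the PJL firmware file on the standard JetDirect port 9100; the hostname and filename are the ones from this post:

```python
import socket

def send_firmware(host: str, path: str, port: int = 9100) -> None:
    """Stream a firmware (PJL) file to the printer's raw print port."""
    with open(path, "rb") as f, socket.create_connection((host, port)) as s:
        s.sendall(f.read())
        s.shutdown(socket.SHUT_WR)  # signal end-of-file to the printer

# send_firmware("hl3040cn.local", "LZ2599_N.djf")
```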

    The LCD on the front of printer proceeded to display something like “Updating Program” and eventually the printer re-DHCPed and indicated the main firmware had gone from 1.11 to 1.19. Great! However the PCLPS firmware was still at 1.02 and I’d got the impression that 1.04 was out. I didn’t manage to figure out how to get the Brother update website to give me the 1.04 firmware, but I did manage to find a copy of LZ2600_D.djf which I was then able to send to the printer in the same way. This led to:

    $ snmpwalk -v 2c -c public hl3040cn.local iso.
    iso. = STRING: "MODEL=\"HL-3040CN series\""
    iso. = STRING: "SERIAL=\"G0JXXXXXX\""
    iso. = STRING: "SPEC=\"0001\""
    iso. = STRING: "FIRMID=\"MAIN\""
    iso. = STRING: "FIRMVER=\"1.19\""
    iso. = STRING: "FIRMID=\"PCLPS\""
    iso. = STRING: "FIRMVER=\"1.04\""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""
    iso. = STRING: ""

    Cool, eh?

    [Disclaimer: This worked for me. I’ve no idea if it’ll work for anyone else. Don’t come running to me if you brick your printer.]

    2015-11-21T13:27:56+00:00 Jonathan McDowell 2015-11-21T13:27:56+00:00 Planet Debian ACM starts campaign on payment methods in web shops

    Via Consuwijzer, the Autoriteit Consument & Markt (ACM) has started a campaign to make consumers aware of how they pay in a web shop. The idea is that consumers sometimes pay in advance rather too readily, subsequently never receive their products, and can then whistle for their money and their order.

    We have written regularly about the rule that a web shop must in principle offer an option whereby the customer can pay (at least 50% of the amount) only upon or after delivery. There are several options for this, such as cash on delivery, a party like Afterpay, or simply sending an invoice later.

    Many web shops, particularly new ones, do not know this rule and often offer only iDeal, which is therefore insufficient. Shops selling more expensive products often consider the risk too great and demand full payment in advance. From the consumer's point of view the risk is of course just as great, for example when payment has been made but nothing is delivered. So there is certainly something to be said for the campaign; entrepreneurs sometimes forget this last point.

    What I find positive about the campaign is that it mainly talks about insured payment methods (and less about the obligation to offer payment afterwards), giving the customer more certainty. So not necessarily "always choose to pay afterwards": think of credit cards, but certainly also of PayPal. A consequence of the campaign may, however, be that this becomes more of a priority for the ACM and that payment rules will be enforced more strictly.

    A point of criticism remains that the pay-afterwards rule is typically Dutch and that most other countries do not have it. There, it is up to the consumer to decide whether to order from a shop that only offers prepayment: if you do not trust it, you do not order. Not a bad idea in itself. The difference in regulation is awkward, because many shops want to trade internationally and would prefer the same rules everywhere.

    Related articles

    2015-11-20T08:00:42+00:00 Maarten Braun 2015-11-20T08:00:42+00:00 ICTRecht Timo Jyrinki: Converting an existing installation to LUKS using luksipc

    This is a burst of notes that I wrote in an e-mail in June when asked about it, and I'm not going to have any better steps since I don't remember even that amount as back then. I figured it's better to have it out than not.

    So... if you want to use LUKS In-Place Conversion Tool, the notes below on converting a shipped-with-Ubuntu Dell XPS 13 Developer Edition (2015 Intel Broadwell model) may help you. There were a couple of small learnings to be had...
    The page itself is good and without errors, although it funnily uses reiserfs as an example. It was only a bit unclear why I saved the initial_keyfile.bin since it was then removed in the next step (I guess it's for the case you want to have a recovery file hidden somewhere in case you forget the passphrase).

    For using the tool I booted from a 14.04.2 LTS USB live image and operated there, including downloading and compiling luksipc in the live session. The exact reason for resizing before luksipc was a bit unclear to me at first, so I simply resized the main rootfs partition and left unallocated space in the partition table.

    Then finally I ran ./luksipc -d /dev/sda4 etc.

    I realized I want /boot to be on an unencrypted partition to be able to load the kernel + initrd from grub before entering into LUKS unlocking. I couldn't resize the luks partition anymore since it was encrypted... So I resized what I think was the empty small DIAGS partition (maybe used for some system diagnostic or something, I don't know), or possibly the next one that is the actual recovery partition one can reinstall the pre-installed Ubuntu from. And naturally I had some problems because it seems vfatresize tool didn't do what I wanted it to do and gparted simply crashed when I tried to use it first to do the same. Anyway, when done with getting some extra free space somewhere, I used the remaining 350MB for /boot where I copied the rootfs's /boot contents to.

    After adding the passphrase in luks I had everything encrypted etc and decryptable, but obviously I could only access it from a live session by manual cryptsetup luksOpen + mount /dev/mapper/myroot commands. I needed to configure GRUB, and I needed to do it with the grub-efi-amd64 which was a bit unfamiliar to me. There's also grub-efi-amd64-signed I have installed now but I'm not sure if it was required for the configuration. Secure boot is not enabled by default in BIOS so maybe it isn't needed.

    I did GRUB installation – I think inside rootfs chroot where I also mounted /dev/sda6 as /boot (inside the rootfs chroot), ie mounted dev, sys with -o bind to under the chroot (from outside chroot) and mount -t proc proc proc too. I did a lot of trial and error so I surely also tried from outside the chroot, in the live session, using some parameters to point to the mounted rootfs's directories...

    I definitely needed to install cryptsetup etc inside the encrypted rootfs with apt, and I remember debugging for some time whether they went into the initrd correctly after I executed mkinitramfs/update-initramfs inside the chroot.

    At the end I had grub asking for the password correctly at bootup. Obviously I had edited the rootfs's /etc/fstab to include the new /boot partition, I changed / to be "UUID=/dev/mapper/myroot /     ext4    errors=remount-ro 0       ", kept /boot/efi as coming from the /dev/sda1 and so on. I had also added "myroot /dev/sda4 none luks" to /etc/crypttab. I seem to also have GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot" in /etc/default/grub.
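    Gathering those scattered settings in one place, the relevant files ended up looking roughly like this. This is a reconstruction from the description above, using the author's device names; the exact fstab fields are a plausible sketch rather than a verbatim copy:

```
# /etc/crypttab
myroot /dev/sda4 none luks

# /etc/fstab (root and boot entries)
/dev/mapper/myroot  /      ext4  errors=remount-ro  0  1
/dev/sda6           /boot  ext4  defaults           0  2

# /etc/default/grub
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot"
```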

    The only thing I did save from the live session was the original partition table if I want to revert.

    So the original was:

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sda: 500118192 sectors, 238.5 GiB
    Logical sector size: 512 bytes
    First usable sector is 34, last usable sector is 500118158
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 6765 sectors (3.3 MiB)
    Number  Start (sector)    End (sector)  Size       Code  Name
    1            2048         1026047   500.0 MiB   EF00  EFI system partition
    2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
    3         1107968         7399423   3.0 GiB     0700  Basic data partition
    4         7399424       467013631   219.2 GiB   8300
    5       467017728       500117503   15.8 GiB    8200

    And I now have:

    Number  Start (sector)    End (sector)  Size       Code  Name
    1            2048         1026047   500.0 MiB   EF00  EFI system partition
    2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
    3         1832960         7399423   2.7 GiB     0700  Basic data partition
    4         7399424       467013631   219.2 GiB   8300
    5       467017728       500117503   15.8 GiB    8200
    6         1107968         1832959   354.0 MiB   8300

    So it seems I did not edit DIAGS (and it was also originally just 40MB) but did something with the recovery partition while preserving its contents. It's a FAT partition so maybe I was able to somehow resize it after all.

    The 16GB partition is the default swap partition. I did not encrypt it, at least not yet; I tend never to run into swap anyway in my normal use with 8GB of RAM.

    If you go this route, good luck! :D
    2015-11-20T14:14:00+00:00 Timo Jyrinki 2015-11-20T14:14:00+00:00 Planet Debian Verizon Retiring Inactive Email Addresses

    Oracle Marketing Cloud's Kevin Senne reports that Verizon is telling subscribers that email addresses unaccessed for 180 days will be deleted. This is yet another reminder that you (senders) can't sit on email addresses for years and still expect them to be valid. I have no clue if Verizon ever repurposes dead email addresses into spamtraps, but it's not something I would want to risk.

    This post first appeared on Al Iverson's Spam Resource.

    2015-11-20T18:10:39+00:00 Al Iverson 2015-11-20T18:10:39+00:00 Al Iverson's Spam Resource Running the Netherlands entirely on green energy: it can be done!

    Scientists have set out where the energy should come from if the Netherlands wants to run entirely on green energy by 2050. It is a question politicians will be racking their brains over during the climate summit in Paris in the coming weeks: how can we prevent global warming from getting out of hand? One answer lies […]

    Read the full article (Running the Netherlands entirely on green energy: it can be done!) at

    2015-11-20T18:12:35+00:00 Caroline Kraaijvanger 2015-11-20T18:12:35+00:00 Dealing with blocklists, deliverability and abuse people

    There are a lot of things all of us in the deliverability, abuse and blocklist space have heard, over and over and over again. They’re so common they’re running jokes in the industry. These phrases are used by spammers, but a lot of non-spammers seem to use them as well.

    The most famous is probably “I’m sure they’ll unblock me if I can just explain my business model.” Trust me, the folks blocking your mail don’t want to hear about your business model. They just want you to stop doing whatever it is you’re doing. In fact, I’m one of the few people in the space who actually wants to hear about your business model – so I can help you reach your goals without doing things that get you blocked.

    A few months ago, after getting off yet another phone call where I talked clients down from explaining their business model to Spamhaus, I put together a list of phrases that senders really shouldn’t use when talking to their ESP, a blocklist provider or an abuse desk. I posted it to a closed list and one of the participants turned it into a bingo card.


    A lot of these statements are valid marketing and business statements. But the folks responsible for blocking mail don’t really care. They just want their users to be happy with the mail they receive.

    The post Dealing with blocklists, deliverability and abuse people appeared first on Word to the Wise.

    2015-11-21T01:03:47+00:00 laura 2015-11-21T01:03:47+00:00 Word to the Wise How DANE Strengthens Security for TLS, S/SMIME and Other Applications

    The Domain Name System (DNS) offers ways to significantly strengthen the security of Internet applications via a new protocol called the DNS-based Authentication of Named Entities (DANE). One problem it helps to solve is how to easily find keys for end users and systems in a secure and scalable manner. It can also help to address well-known vulnerabilities in the public Certification Authority (CA) model. Applications today need to trust a large number of global CAs. There are no scoping or naming constraints for these CAs — each one can issue certificates for any server or client on the Internet, so the weakest CA can compromise the security of the whole system. As described later in this article, DANE can address this vulnerability.

    DANE is Built on DNSSEC

    DANE is built on the foundation provided by the DNS Security Extensions (DNSSEC). DNSSEC is a cryptographic system to verify the authenticity of data in the DNS. Domain owners digitally sign data in their DNS zones, and DNS resolvers authenticate these signatures as they look up DNS records. This provides protection against well-known attacks, such as DNS cache poisoning and DNS spoofing.

    Validating with DNSSEC

    In effect, DNSSEC transforms the DNS into an authenticated directory of information associated with domain names, and as a result some natural follow-on benefits appear. DNSSEC can be used to securely store and retrieve cryptographic keying material, such as public keys, X.509 certificates, etc. in the DNS. These can in turn be used to significantly strengthen the security of Internet applications, and address a variety of vulnerabilities that exist in today's deployed systems.

    Security for TLS Using DANE

    The "TLSA" DNS record type defined in the DANE protocol describes how to associate Transport Layer Security (TLS) certificates with the domain names of servers. These can then be used to secure TLS applications, such as Web (HTTPS), email transport (Simple Mail Transport Protocol (SMTP) over TLS), instant messaging (XMPP over TLS) and many more.
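    Concretely, a TLSA record for an HTTPS server lives at a name derived from the port and protocol, such as _443._tcp.www.example.com. For the common "3 1 1" parameter combination (DANE-EE usage, SubjectPublicKeyInfo selector, SHA-256 matching type), the record payload is simply a hash, sketched here in Python; spki_der is assumed to be the DER-encoded SubjectPublicKeyInfo taken from the server's certificate:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Payload of a TLSA record with usage=3 (DANE-EE),
    selector=1 (SubjectPublicKeyInfo), matching type=1 (SHA-256)."""
    return hashlib.sha256(spki_der).hexdigest()

# The zone would then publish something like:
#   _443._tcp.www.example.com. IN TLSA 3 1 1 <output of tlsa_3_1_1(...)>
```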


    SMTP over TLS is one application where DANE is seeing growing production scale deployment on email servers with large numbers of users. The appearance of DANE for SMTP transport security is particularly timely. SMTP over TLS has traditionally been used in an opportunistic manner — it is used only if both sides of the SMTP connection support it. However, a man-in-the-middle attacker can easily subvert the security by stripping away the TLS capability indication and downgrade the connection to be unencrypted. With DANE, SMTP servers use the presence of a signed TLSA record in the DNS to (a) confirm the intent to secure the session with TLS, preventing downgrade attacks, and (b) authenticate the connection with DANE.

    Additional DANE record types are currently in development to accommodate more applications.

    Security for Email Using DANE

    The upcoming OPENPGPKEY and SMIMEA records will allow use of the DNS to store and retrieve PGP (Pretty Good Privacy) public keys and S/MIME certificates for end users. PGP and S/MIME are commonly used for secure end-to-end messaging (i.e. encryption and digital signing). DANE provides a new way to authenticate these keys and certificates in addition to or in place of the current ways that users do this. In addition the DNS provides an always available, globally distributed mechanism to find these keys, solving a crucial problem of easily locating keys for inter-organizational email. The end-to-end messaging scenario is discussed in detail in a recent Verisign blog post.
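    For example, the owner name at which a user's OPENPGPKEY record is published is derived from the email address: under the draft specification, the local part is hashed with SHA2-256 and the digest truncated to 28 octets. A hedged sketch in Python (details such as case normalization are simplified here):

```python
import hashlib

def openpgpkey_owner(email: str) -> str:
    """Owner name for an OPENPGPKEY record: SHA2-256 of the local part,
    truncated to 28 octets (56 hex digits), as a label under
    _openpgpkey in the user's domain."""
    local, domain = email.split("@", 1)
    digest = hashlib.sha256(local.encode("utf-8")).hexdigest()[:56]
    return "{}._openpgpkey.{}".format(digest, domain)

owner = openpgpkey_owner("hugh@example.com")
```

    A mail client would query that name for an OPENPGPKEY record and validate the answer with DNSSEC before trusting the key.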


    Emerging projects, such as the US National Cybersecurity Center of Excellence's (NCCoE) Secure Email initiative, are already exploring ways to use such mechanisms. With the advent of DNSSEC and DANE, it is now possible to deploy inter-organizational secure email in a truly scalable and manageable way.

    More Security Use Cases For DANE

    The proposed Payment Association (PMTA) record associates payment information (such as account numbers, Bitcoin wallets and other forms of electronic currency) with easier to use domain names typically corresponding to users. Companies like Armory and Netki are already integrating DANE PMTA support in their Bitcoin wallet implementations.

    There is a proposal to enhance the TLSA record to allow the use of TLS client certificates. This fills a gap in the current specification, which only works with TLS server certificates. With this enhancement, many applications that employ client certificates will be able to use DANE to authenticate them. In particular, some design patterns from the Internet of Things are already planning to use this mechanism, where large networks of physical objects identified by domain names may authenticate themselves using TLS to centralized device management and control platforms.

    Another proposal in progress involves a DANE and DNSSEC authentication chain extension for the TLS protocol. This mechanism allows a TLS server, when prompted by a compatible client, to deliver the TLSA record corresponding to its server certificate along with the complete chain of DNSSEC records needed to authenticate it. The TLS client gains a performance advantage by not needing to do all these DNS queries itself. It can also help in situations where the client finds itself behind a middlebox that impedes its ability to successfully issue DANE- and DNSSEC-enabled queries. These things are important preconditions for applications like Web browsers and Web servers to adopt DANE.

    What it Takes for DANE to Work

    In short, DANE provides the ability to use DNSSEC to perform the critically important function of secure key learning and verification. It can use the DNS directly to distribute and authenticate certificates and keys for endpoints. It can also work in conjunction with today's public CA system by applying additional constraints about which CAs are authorized to issue certificates for specific services or users — thereby significantly reducing risks in the currently deployed CA system. A recent paper from Verisign Labs explores this topic in more detail.

    For more information, visit the Verisign Labs page on DANE.

    Written by Shumon Huque, Principal Research Scientist at Verisign Labs

    Follow CircleID on Twitter

    More under: DNS, DNS Security, Security

    2015-11-20T20:39:00+00:00 Shumon Huque 2015-11-20T20:39:00+00:00 CircleID Red Hat Enterprise Linux 7.2 – A major desktop milestone

    So many of you have probably seen that RHEL 7.2 is out today. There are many important updates in this release, some of them detailed in the official RHEL 7.2 press release.

    One thing however which you would only discover if you start digging into the 7.2 update is that it's the first time in RHEL history that we are doing a full-scale desktop update in a point release. We shipped RHEL 7.0 with GNOME 3.8 and in RHEL 7.2 we are updating it to GNOME 3.14. This brings a lot of major new features into RHEL, like the work we did on improved HiDPI support and improved touch and gesture support; it also brings GNOME Software and the improved system status area to RHEL. We plan on updating the desktop further in later RHEL 7.x point releases.

    This change of policy is of course important to the many RHEL Workstation customers we have, but I also hope it will make RHEL Workstation, and the CentOS Workstation, more attractive options for those in the community who have been looking for an LTS version of Fedora. This policy change gives you the rock-solid foundation of RHEL and the RHEL kernel and combines it with a very well tested yet fairly new desktop release. So if you feel Fedora is moving too quickly, yet have felt that RHEL on the other hand has been moving too slowly, we hope that with this change to RHEL we have found a sweet compromise.

    We will of course also keep doing regular application updates in RHEL 7.x, just like we started doing in RHEL 6.x, giving you up-to-date versions of things like LibreOffice, Firefox, Thunderbird and more.

    2015-11-19T21:35:12+00:00 uraeus 2015-11-19T21:35:12+00:00 Fedora People YouTube Backs Its Users With New Fair Use Protection Program

    In what we very much hope launches a “race to the top” to protect online fair use, today YouTube announced a new program to help users fight back against outrageous copyright threats. The company has created a ‘Fair Use Protection’ program that will cover legal costs of users who, in the company’s view, have been unfairly targeted for takedown.

    We have criticized YouTube in the past for not doing enough to protect fair use on its service, including silencing videos based on vague “contractual obligations” and failing to fix the many problems with its Content ID program. However, when the company takes positive steps to protect its users, we take notice.

    Google describes the program on its blog, but here are the basic details: When the company notices that a video targeted for takedown is clearly a lawful fair use, it may choose to offer the user the option of enrolling their video into the program. If the user decides to join, the video will stay up in the United States and, if the rightsholder sues, YouTube will provide assistance of up to $1 million in legal fees.

    YouTube has started the program off with four videos that the company believes represent fair use. You can watch them here.

    While we would like the program to do a little bit more—for example, given that the main criterion is that a video must be clearly lawful, we’d like YouTube to offer every user who meets that criterion the option of enrolling their video in the program, rather than hand-selecting which ones get to participate—we think this is a solid and unprecedented step forward in protecting fair use on the site.

    We commend YouTube for standing up for its users, and we hope the program will inspire other service providers on the web to follow its lead.

    2015-11-19T19:24:38+00:00 Amul Kalia 2015-11-19T19:24:38+00:00 Deeplinks 'Microsoft callers' use your number, phone ringing off the hook

    More and more often, so-called 'Microsoft callers' are using Dutch telephone numbers to hide behind. Strikingly, these are regularly 024 numbers from the Nijmegen region. If your number happens to be picked, brace yourself.

    That is what happened to a man from Nijmegen who received at least forty calls in three weeks. He was called by people telling him they had a missed call from him. One woman who called said she had been phoned by a supposed Microsoft employee. She did not trust it, hung up, called the 024 number back, and got the man from Nijmegen on the line.

    In 2015, more than 1,300 people have already reported to the Fraudehelpdesk that they were called by a supposed Microsoft employee. In at least 37 cases an 024 number was involved, 35 of them in the last month. It is unlikely that these people were actually called by someone from the Nijmegen region: placing calls under an arbitrary other number as cover, for example with special software, is trivial. The number people see on their screen is then often not the number the caller is actually using.

    The so-called Microsoft callers pose as employees of the software giant. They come across as helpful, but are after your money or personal data. The callers often speak English with an Indian or Pakistani accent.

    Photo credit: Gabriel/, CC BY 2.0.

    The post Microsoft callers use your number, phone ringing off the hook appeared first on Fraudehelpdesk.

    2015-11-18T14:14:54+00:00 Tjerk Notten 2015-11-18T14:14:54+00:00 Fraudehelpdesk - Algemeen DOD breaks links in .mil clients

    The Department of Defense is breaking HTML links in mail to .mil domains. This is part of the DoD’s attempt to curtail phishing.

    a great majority of intrusions into Pentagon networks are the result of the kind of human error that is exploited in phishing attacks, in which seemingly trustworthy e-mail links are used as attack vectors to hijack user computers, install malware or steal credentials.

    Instead of being able to click on links, .mil recipients will have to cut and paste links into a browser in order to visit the website. This will also affect open tracking and break images in emails.

    If you’re sending to .mil domains, plain text is going to be best. The DoD has had a policy of not rendering HTML, but some mail clients still did. Now the DoD is taking extra steps to break links.

    My suggestions for senders who need to send mail to .mil domains:

    1. Use plain text.
    2. Make links as short as possible so that they’re easier to cut and paste.
    3. Calls to action are even more important, as you’re asking recipients to take an extra step.
    4. For those of you who can, try to get an address that’s not .mil.

    For mailers who might sometimes get .mil addresses on your lists, think about whether or not you really want to allow them. Try to get a different address for them. Deliverability will be easier and your pretty HTML can be displayed.
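    Suggestion 1 above is straightforward to apply. Here is a minimal sketch of such a plain-text message; the addresses and the short URL are placeholders of my own, not real endpoints:

```shell
# A minimal plain-text message for a .mil recipient.
# The addresses and URL below are placeholders, not real endpoints.
cat > message.txt <<'EOF'
From: newsletter@example.com
To: someone@example.mil
Subject: Your monthly report
Content-Type: text/plain; charset=us-ascii

Your report is ready. Copy and paste this link into your browser:

https://example.com/r/abc123
EOF

# Sanity check before sending: exactly one short, bare URL.
grep -c '^https://' message.txt   # prints 1
```

    A single bare URL on its own line keeps the cut-and-paste step as painless as possible for the recipient.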


    The post DOD breaks links in .mil clients appeared first on Word to the Wise.

    2015-11-17T22:59:38+00:00 laura 2015-11-17T22:59:38+00:00 Word to the Wise Btrfs RAID 6 on dm-crypt on Fedora 23

    I’m building a NAS and given the spare drives I have at the moment, thought I’d have a play with Btrfs. Apparently RAID 6 is relatively safe now, so why not put it through its paces? As Btrfs doesn’t support encryption, I will need to build it on top of dm-crypt.

    Boot drive:

    • /dev/sda

    Data drives:

    • /dev/sdb
    • /dev/sdc
    • /dev/sdd
    • /dev/sde
    • /dev/sdf

    I installed Fedora 23 Server onto /dev/sda and just went from there, opening a shell.
    # Setup dm-crypt on each data drive
    # and populate the crypttab file.
    for x in b c d e f ; do
      cryptsetup luksFormat /dev/sd${x}
      UUID="$(cryptsetup luksUUID /dev/sd${x})"
      echo "luks-${UUID} UUID=${UUID} none" >> /etc/crypttab
    done
    # Rebuild the initial ramdisk with crypt support
    echo 'add_dracutmodules+=" crypt "' >> /etc/dracut.conf.d/crypt.conf
    dracut -fv
    # Verify that it now has my crypttab
    lsinitrd /boot/initramfs-$(uname -r).img |grep crypttab
    # Reboot and verify initramfs prompts to unlock the devices
    # After boot, verify devices exist
    ls -l /dev/mapper/luks*

    OK, so now I have a bunch of encrypted disks, it’s time to put btrfs into action (note the label, btrfs_data):
    # Get LUKS UUIDs and create btrfs raid filesystem
    for x in b c d e f ; do
      DEVICES="${DEVICES} $(cryptsetup luksUUID /dev/sd${x} \
        |sed 's|^|/dev/mapper/luks-|g')"
    done
    mkfs.btrfs -L btrfs_data -m raid6 -d raid6 ${DEVICES}

    See all our current btrfs volumes:
    btrfs fi show

    Get the UUID of the filesystem so that we can create an entry in fstab, using the label we created before:
    UUID=$(btrfs fi show btrfs_data |awk '/uuid:/ {print $4}')
    echo "UUID=${UUID} /mnt/btrfs_data btrfs noatime,subvolid=0 0 0"\
      >> /etc/fstab

    Now, let’s create the mountpoint and mount the device:
    mkdir /mnt/btrfs_data
    mount -a

    Check data usage:
    btrfs filesystem df /mnt/btrfs_data/

    This has mounted the root of the filesystem to /mnt/btrfs_data, however we can also create subvolumes. Let’s create one called “share” for shared network data:
    btrfs subvolume create /mnt/btrfs_data/share

    You can mount this specific volume directly, let’s add it to fstab:
    echo "UUID=${UUID} /mnt/btrfs_share btrfs noatime,subvol=share 0 0"\
      >> /etc/fstab
    mkdir /mnt/btrfs_share
    mount -a

    You can list and delete subvolumes:
    btrfs subvolume list -p /mnt/btrfs_data/
    btrfs subvolume delete /mnt/btrfs_data/share

    Now I plugged in a few backup drives and started rsyncing a few TB across to the device. It seemed to work well!

    There are lots of other things you can play with, like snapshots, compression, defragmentation, scrub (uses checksums to repair corrupt data), rebalance (re-allocates blocks across devices), etc. You can even convert existing file systems with the btrfs-convert command, and use rebalance to change the RAID level. Neat!

    Then I thought I’d try the rebalance command, just to see how that works with a RAID device. Given it’s a large device, I kicked it off and went to do something else. I returned to an unwakeable machine… After a hard reset, journalctl -b -1 told me this sad story:

    Nov 14 06:03:42 localhost.localdomain kernel: ------------[ cut here ]------------
    Nov 14 06:03:42 localhost.localdomain kernel: kernel BUG at fs/btrfs/extent-tree.c:1833!
    Nov 14 06:03:42 localhost.localdomain kernel: invalid opcode: 0000 [#1] SMP
    Nov 14 06:03:42 localhost.localdomain kernel: Modules linked in: fuse joydev synaptics_usb uas usb_storage rfcomm cmac nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ebtable_nat ebtab
    Nov 14 06:03:42 localhost.localdomain kernel: snd_soc_core snd_hda_codec rfkill snd_compress snd_hda_core snd_pcm_dmaengine ac97_bus snd_hwdep snd_seq snd_seq_device snd_pcm mei_me dw_dmac i2c_designware_platform snd_timer snd_soc_sst_a
    Nov 14 06:03:42 localhost.localdomain kernel: CPU: 0 PID: 6274 Comm: btrfs Not tainted 4.2.5-300.fc23.x86_64 #1
    Nov 14 06:03:42 localhost.localdomain kernel: Hardware name: Gigabyte Technology Co., Ltd. Z97N-WIFI/Z97N-WIFI, BIOS F5 12/08/2014
    Nov 14 06:03:42 localhost.localdomain kernel: task: ffff88006fd69d80 ti: ffff88000e344000 task.ti: ffff88000e344000
    Nov 14 06:03:42 localhost.localdomain kernel: RIP: 0010:[<ffffffffa0932af7>] [<ffffffffa0932af7>] insert_inline_extent_backref+0xe7/0xf0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: RSP: 0018:ffff88000e3476a8 EFLAGS: 00010293
    Nov 14 06:03:42 localhost.localdomain kernel: RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
    Nov 14 06:03:42 localhost.localdomain kernel: RDX: ffff880000000000 RSI: 0000000000000001 RDI: 0000000000000000
    Nov 14 06:03:42 localhost.localdomain kernel: RBP: ffff88000e347728 R08: 0000000000004000 R09: ffff88000e3475a0
    Nov 14 06:03:42 localhost.localdomain kernel: R10: 0000000000000000 R11: 0000000000000002 R12: ffff88021522f000
    Nov 14 06:03:42 localhost.localdomain kernel: R13: ffff88013f868480 R14: 0000000000000000 R15: 0000000000000000
    Nov 14 06:03:42 localhost.localdomain kernel: FS: 00007f66268a08c0(0000) GS:ffff88021fa00000(0000) knlGS:0000000000000000
    Nov 14 06:03:42 localhost.localdomain kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Nov 14 06:03:42 localhost.localdomain kernel: CR2: 000055a79c7e6fd0 CR3: 00000000576ce000 CR4: 00000000001406f0
    Nov 14 06:03:42 localhost.localdomain kernel: Stack:
    Nov 14 06:03:42 localhost.localdomain kernel: 0000000000000000 0000000000000005 0000000000000001 0000000000000000
    Nov 14 06:03:42 localhost.localdomain kernel: 0000000000000001 ffffffff81200176 0000000000270026 ffffffffa0925d4a
    Nov 14 06:03:42 localhost.localdomain kernel: 0000000000002158 00000000a7c0ba4c ffff88021522d800 0000000000000000
    Nov 14 06:03:42 localhost.localdomain kernel: Call Trace:
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff81200176>] ? kmem_cache_alloc+0x1d6/0x210
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa0925d4a>] ? btrfs_alloc_path+0x1a/0x20 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa0932f99>] __btrfs_inc_extent_ref.isra.52+0xa9/0x270 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa09386b4>] __btrfs_run_delayed_refs+0xc84/0x1080 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa093b674>] btrfs_run_delayed_refs.part.73+0x74/0x270 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa0925ecb>] ? btrfs_release_path+0x2b/0xa0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa093b885>] btrfs_run_delayed_refs+0x15/0x20 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa094ff26>] btrfs_commit_transaction+0x56/0xad0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa09a43be>] prepare_to_merge+0x1fe/0x210 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa09a4e5e>] relocate_block_group+0x25e/0x6b0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa09a547a>] btrfs_relocate_block_group+0x1ca/0x2c0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa0978b6e>] btrfs_relocate_chunk.isra.39+0x3e/0xb0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa097a494>] btrfs_balance+0x9c4/0xf80 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa0986d54>] btrfs_ioctl_balance+0x3c4/0x3d0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffffa0988501>] btrfs_ioctl+0x541/0x2750 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff811b341c>] ? lru_cache_add+0x1c/0x50
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff811b3572>] ? lru_cache_add_active_or_unevictable+0x32/0xd0
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff811d5ffa>] ? handle_mm_fault+0xc8a/0x17d0
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff81223303>] ? cp_new_stat+0xb3/0x190
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff812313b5>] do_vfs_ioctl+0x295/0x470
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff8132944d>] ? selinux_file_ioctl+0x4d/0xc0
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff81231609>] SyS_ioctl+0x79/0x90
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff810656cf>] ? do_page_fault+0x2f/0x80
    Nov 14 06:03:42 localhost.localdomain kernel: [<ffffffff817791ee>] entry_SYSCALL_64_fastpath+0x12/0x71
    Nov 14 06:03:42 localhost.localdomain kernel: Code: 10 49 89 d9 48 8b 55 c0 4c 89 7c 24 10 4c 89 f1 4c 89 ee 4c 89 e7 89 44 24 08 48 8b 45 20 48 89 04 24 e8 5d d5 ff ff 31 c0 eb ac <0f> 0b e8 92 b7 76 e0 66 90 0f 1f 44 00 00 55 48 89 e5
    Nov 14 06:03:42 localhost.localdomain kernel: RIP [<ffffffffa0932af7>] insert_inline_extent_backref+0xe7/0xf0 [btrfs]
    Nov 14 06:03:42 localhost.localdomain kernel: RSP <ffff88000e3476a8>
    Nov 14 06:03:42 localhost.localdomain kernel: ---[ end trace 63b75c57d2feac56 ]---


    Looks like rebalance has a major bug at the moment. I did a search and others have the same problem; it looks like I’m hitting this bug. I’ve reported it on Fedora Bugzilla.

    Anyway, so I won’t do a rebalance at the moment, but other than that, btrfs seems pretty neat. I will make sure I keep my backups up-to-date though, just in case…

    2015-11-15T09:56:19+00:00 Chris 2015-11-15T09:56:19+00:00 Fedora People Black Hat Europe 2015 Wrap-Up

    [The post Black Hat Europe 2015 Wrap-Up has been first published on /dev/random]

    Here is my quick wrap-up of Black Hat Europe 2015, which just ended today. Due to a high workload, I only joined Amsterdam today to attend the second day of briefings and… I’m not disappointed! As usual, there were very interesting sessions and others less attractive. I also missed a very nice one, based on friends’ feedback. That’s always the same issue with multi-track events. After an early drive to Amsterdam on a rainy morning, registration, and some caffeine, it was already time for the first round of talks.

    The first one started with Nikhil Mittal who talked about CI tools: “Continuous Integration: Why CI tools are an attacker’s best friends”. Continuous integration is a set of software engineering practices that speed the delivery of software by decreasing integration times (thank you  Wikipedia). Nikhil’s first slide gave the idea of the talk: “Continuous Intrusion”:

    Nikhil on stage

    As usual, the issues are linked to dangerous features and misconfigurations. A good quote to always keep in mind:

    A single improperly configured tool can ruin your security.

    Nikhil reviewed three popular tools: Jenkins, TeamCity and Go. They are used to integrate code from multiple developers via a code repository, build servers and master servers, to finally deploy the apps to slave computers. From an attacker’s point of view, this is a nice target to pivot from, escalate privileges, etc. Compromising a CI tool means domain admin in most cases.

    The first one to be reviewed was Jenkins. It is the most popular tool. What are its issues:

    • No authentication by default
    • No protection against brute-force attacks
    • No password policy for users
    • Runs with SYSTEM (or high level) privileges on Windows
    • All users could access output of builds (read privileges to anonymous) (release < 1.580)

    How to abuse it? It’s so simple:

    • Add a build step -> if allowed/configured it’s possible to execute commands on remote hosts by regular users
    • It is possible to retrieve credentials in clear text

    To search for Jenkins facing the Internet, you can use the following Google dork: intitle:”Dashboard [Jenkins]”. Nikhil performed two live demos: In the first one, he executed basic Powershell functions. In the second one, he demonstrated how to get a reverse shell using his powercat.ps1 tool.
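    As a quick defensive counterpart to those demos, you can check whether a Jenkins instance answers its JSON API anonymously. A minimal sketch, run here against a canned response rather than a live server; the JSON values are illustrative, and the curl line in the comment shows what a live check would roughly look like:

```shell
# Canned JSON standing in for a Jenkins root API response (/api/json);
# the values are illustrative, not captured from a real server.
response='{"mode":"NORMAL","useCrumbs":false,"useSecurity":false}'

# Against a live instance this would be roughly:
#   response=$(curl -s http://HOST:8080/api/json)
# A 200 answer without credentials is already suspicious; an explicit
# "useSecurity":false suggests authentication is off entirely.
if printf '%s' "$response" | grep -q '"useSecurity":false'; then
  echo "WARNING: Jenkins appears to accept anonymous access"
fi
```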

    The next one was TeamCity. The issues are:

    • Registration of new users allowed by default
    • No password policy for users
    • Runs with SYSTEM (or high privileges)

    If you’re a project admin, you can escalate your privileges to superuser by stealing the superuser token from the master. The associated Google dork is: intitle:”Project – TeamCity”. Here again, the vulnerabilities were demonstrated live.

    For the last one, Go, guess what? Same story, with almost the same vulnerabilities. I liked the one that discloses GitHub credentials in clear text in the console. To complete his presentation, Nikhil disclosed a vulnerability in Jenkins (pre-auth RCE). It was a very nice talk, straight to the point, with clear facts that can be easily reused by pentesters.

    After a first coffee break, I followed Marco Balduzzi & Vincenzo Ciancaglini, who spoke about “Cybercrime in the deep web”. I was expecting a presentation with many statistics about the deep web, but that was not the case. They started a project three years ago: to index the deep web. Their presentation was split in two parts. In the first part, they presented the tool used to crawl, index, store and enrich pages from the deep web. The tool is called “DeWA” – “Deep Web Analyzer“. It’s always important to know what exactly the deep web is. It’s a buzz-word used by many media. It can be defined as “every content not indexed by search engines“. We have:

    • The dark net : private overlay networks
    • The dark web: websites hosted on dark nets

    There is a difference between what’s hidden and what’s really interesting for criminals. They are searching for pages hidden via the following technologies:

    • Tor (.onion)
    • I2P (.i2p)
    • Freenet
    • Namecoints, Emercoins (alternate DNS systems – blockchain based DNS).
    • Rogue TLDs & private DNS (OpenNIC, Cesidian Root,

    The data sources are: user data, pastebin sites, twitter, reddit, URL listing sites, TOR gateways, I2P host files, Scouting feedback. Since November 2013, they collected:

    • 40.5M events
    • 611K urls
    • 20.500K domains

    So, what did they find on offer for criminals?

    • Guns & Ammo
    • Drugs
    • Passports and fake id
    • Counterfeited money
    • Credit cards
    • Doxing (stars, politicians, etc)
    • Assassins (note the extended suffering option!)
    • Crowdfunding evil (when people will die)

    Passports on the deepweb

    The second part of the talk was a review of some well-known malware families which use the deep web for their operations:

    • Skynet
    • Dare
    • Vawtrack
    • TorrentLocker

    A nice piece of research! The next talk in my schedule: “VoIP wars: Destroying Jar Jar Lync” by Fatih Ozavci. The abstract of the presentation was juicy and promised a nice talk. It started with an awesome intro like the Star Wars movies (credits of the picture: @PeteAitch):

    Star Wars intro

    The presentation was the first stage of new research on Skype for Business. This is the new name of the Lync tool, which is more and more used in corporate environments as a unified messaging solution. The presentation covered several vulnerabilities in the Microsoft product (disclosed yesterday):

    • CVE-2015-6061
    • CVE-2015-6062
    • CVE-2015-6063

    Fatih reviewed the product, its components and how the default security is defined. By default, a lot of security features are enforced:

    • SIP over TLS is enforced for clients by default
    • SRTP uses AES
    • SIP replay attack protections
    • Clients validate server response signatures
    • SIP trunks (PSTN gw) security: TLS enabled & IP restricted, no authentication support

    To perform the demos, Fatih used his tool called Viproy. The latest version is now a standalone Metasploit module and it supports TLS interception with TLS certs. Some nice demos (on video) were shown. Basically, XML messages can be used to offer URLs to clients and to make them open them in a browser. Another nice one: sending a client a fake link asking it to download a new Skype update, which is actually a reverse shell. Finally, the last demo abused multiple clients at the same time via BeEF and the browser autopwn module. Very interesting, but not so easy to pull off in a corporate environment!

    After the lunch, a lot of people moved to the “Forum” (the biggest room) to attend a presentation about self-driving cars by Jonathan Petit: “Self-driving and connected cars: Fooling sensors and tracking drivers“. Such cars are equipped with multiple sensors (GPS, LIDAR, cameras, wheel encoders, ultra-sonic sensors, …).

    In the first part, Jonathan focused on the camera (model: MobilEye C2-270), which provides lane detection, rear collision alert and pedestrian alert. It was a “blinding” attack: Jonathan explained how they tested the camera to make it blind and how long it takes to recover. The second sensor tested was the LIDAR (model: IBEO LUX 3), which provides object detection and object tracking. Here again, he demonstrated how to abuse the LIDAR. A first conclusion of the talk is clearly: “Do not trust sensors!“.

    Then, Jonathan explained the purpose of the 802.11p protocol, which allows cars to communicate with each other. Basically, they constantly broadcast beacons which contain a lot of useful information.

    Jonathan on stage

    The problem is that beacons are broadcast in clear text and can be collected by any (rogue) sensor. A beacon sniffer was built and deployed at sensitive places on a campus to track cars. It was demonstrated that you can easily build a surveillance system based on the cars’ beacons.

    The last talk was about “a new tool for discovering Flash 0-day attacks in the wild” by Peter Pi. As an introduction, he explained that 2015 is (was) the year of Flash! Many 0-day attacks hit the Flash player. There were two questions to solve to achieve the goal:

    • How to get infected samples in the wild?
    • How to identify those 0-day from the collected samples?

    Peter on stage

    What are the source channels to find interesting content?

    • Products’ feedback (large number of samples – very effective)
    • URL crawling
    • VT intelligence
    • URL patterns

    Peter presented his tool called AFED – “Advanced Flash Exploit Detector“. Nothing special… In parallel to this talk, there was another one which was really impressive (based on a friend’s feedback): “Bypassing local Windows authentication to defeat full disk encryption”.

    Security Panel

    The day ended with a panel session with Jeff Moss, Marion Marschalek, Haroon Meer and Jennifer Savage: an interesting discussion about the current security landscape. Dates and location of the next edition are already known: November 1-4 in London!




    2015-11-13T20:39:34+00:00 Xavier 2015-11-13T20:39:34+00:00 /dev/random Upgrading to PHP 7

    Yesterday, O’Reilly published my report on Upgrading to PHP 7. This 80-page mini-eBook is available free (and DRM free) in ePub, Mobi, and PDF formats.

    Grab it today or read more details here.

    Upgrading to PHP 7

    2015-11-11T16:00:00+00:00 Davey Shafik 2015-11-11T16:00:00+00:00 Planet PHP CoreOS Introduces Clair: Open Source Vulnerability Analysis for your Containers

    Today we are open sourcing a new project called Clair, a tool to monitor the security of your containers. Clair is an API-driven analysis engine that inspects containers layer-by-layer for known security flaws. Using Clair, you can easily build services that provide continuous monitoring for container vulnerabilities. CoreOS believes tools that improve the security of the world's infrastructure should be available for all users and vendors, so we made the project open source. With that same purpose, we welcome your feedback and contributions to the Clair project.

    Clair is the foundation of the beta version of Quay Security Scanning, a new feature running now on Quay to examine the millions of containers stored there for security vulnerabilities. Quay users can log in today to see Security Scanning information in their dashboard, including a list of potentially vulnerable containers in their repositories. The Quay Security Scanning beta announcement has more details for Quay users.

    Why Create Clair: For Improved Security

    Vulnerabilities will always exist in the world of software. Good security practice means being prepared for the mishaps – to identify insecure packages and be prepared to update them quickly. Clair is designed to help you identify insecure packages that may exist in your containers.

    Understanding how systems are vulnerable is a laborious task, especially when dealing with heterogeneous and dynamic setups. The goal is to empower any developer to gain intelligence about their container infrastructure. More than that, teams are empowered to act and apply a fix to vulnerabilities as they arise.

    How Clair Works

    Clair scans each container layer and provides a notification of vulnerabilities that may be a threat, based on the Common Vulnerabilities and Exposures database (CVE) and similar databases from Red Hat, Ubuntu, and Debian. Since layers can be shared between many containers, introspection is vital to build an inventory of packages and match that against known CVEs.
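    Clair’s real API is richer than this, but the core idea described above (build a per-layer package inventory and intersect it with the package versions that CVE feeds mark as vulnerable) can be sketched with standard tools. The package names and the vulnerable list below are invented for illustration, not taken from Clair:

```shell
# Hypothetical package inventory extracted from one image layer.
sort > layer_packages.txt <<'EOF'
bash-4.3-7
openssl-1.0.1f-1
zlib-1.2.8-4
EOF

# Hypothetical vulnerable-version list derived from a CVE feed.
sort > vulnerable.txt <<'EOF'
openssl-1.0.1f-1
libpng-1.6.9-2
EOF

# Packages present in the layer that the CVE-derived list flags.
comm -12 layer_packages.txt vulnerable.txt   # prints: openssl-1.0.1f-1
```

    Because layers are content-addressed and shared, this intersection only has to be computed once per layer, which is what makes re-notification without rescanning cheap.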

    Automatic detection of vulnerabilities will help increase awareness and best security practices across developer and operations teams, and encourage action to patch and address the vulnerabilities. When new vulnerabilities are announced, Clair knows right away, without rescanning, which existing layers are vulnerable and notifications are sent.

    For example, CVE-2014-0160, aka "Heartbleed" has been known for over 18 months, yet Quay Scanning found it is still a potential threat to 80 percent of the Docker images users have stored on Quay. Just like CoreOS Linux contains an auto-update tool which patched Heartbleed at the OS layer, we hope this tool will improve the security of the container layer, and help make CoreOS the most secure place to run containers.

    Take note that vulnerabilities often rely on particular conditions in order to be exploited. For example, Heartbleed only matters as a threat if the vulnerable OpenSSL package is installed and being used. Clair isn’t suited for that level of analysis and teams should still undertake deeper analysis as required.

    Get Started

    To learn more, watch this talk presented by Joey Schorr and Quentin Machu about Clair. And, here are the slides from the talk.

    This is only the beginning and we expect more and more development. Contributions and support from the community are welcome – try it out in Quay or enable it in your container environment and let us know what you think.

    The team behind Clair will be at DockerCon EU in Barcelona, November 16-17. Please stop by the Quay booth to learn more or see a demo of Clair or Quay Security Scanning.

    2015-11-13T00:00:00+00:00 2015-11-13T00:00:00+00:00 CoreOS Blog Thoughts on Two Years of Working from Home

    I've spent the past two years working from home as a network engineer for two different companies. At first, I wasn't sure how well the remote lifestyle would suit me, but after a short time I settled into a very comfortable routine. And to my surprise, I discovered that I was much more productive working from the serenity of my home office than I ever was in a cubicle. I'd like to share my observations with the hope of convincing others to try ditching the office as well.

    Why Work Remote?

    No More Commute

    This is the most obvious benefit to working remote. No more sitting in rush hour traffic twice a day. Even if you take public transit and are able to play on your laptop for most of the trip, commuting is a major time sink. Most people will instantly gain back at least an hour of time by foregoing the daily drive to and from the office. What could you do with an extra hour each day?

    And beyond time, there are ample corollary benefits. You (or your company) are no longer paying for as much fuel or fare. You're greatly reducing your risk of being injured in a traffic accident, simply by reducing exposure. You're reducing your carbon footprint. And you're one less car on the road or occupied seat on the train, which reduces the burden on public infrastructure that's already strained to the breaking point in many cities.

    Continue reading

    2015-11-11T03:44:21+00:00 Jeremy Stretch 2015-11-11T03:44:21+00:00 Blog Want to be happier? Turn your back on Facebook!

    People who turn their backs on Facebook are less stressed, live more in the present, and are more satisfied with their lives. That is what an experiment by the Happiness Research Institute suggests. The researchers gathered just over 1,000 test subjects and asked them, among other things, how satisfied they were with their lives and how stressed […]

    Read the full post (Want to be happier? Turn your back on Facebook!) at

    2015-11-11T12:27:51+00:00 Caroline Kraaijvanger 2015-11-11T12:27:51+00:00 Student experiment shows: personalised phishing mail is downright dangerous

    How many people actually click on a well-crafted phishing mail? And what is the effect of a personal salutation in such a message? Students of electrical engineering and industrial design at the University of Twente (UT) wanted to find out. They sent almost 600 fake mails to staff of their own faculty, with a striking result.

    The mails contained the following text (in English):

    ‘Due to recent changes to the UT computer system, a number of problems have arisen between our database servers. These servers, which store usernames and passwords, are not properly synchronised.’

    The goal of the mail was to lure people to a fake website and have them log in there with their username and password. Two kinds of mail were sent: one with a generic salutation (Dear employee) and one with a personal salutation (Dear Ms Janssen).

    Credentials entered
    Of the employees who received the generic email, 32 percent went to the website and 19 percent entered personal details. For the personalised emails those percentages were higher: 38 percent went to the website and 29 percent entered details.
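    Put in relative terms, a quick sketch using the reported percentages:

```shell
# Relative increase from personalisation, using the reported rates:
# visits 32% -> 38%, credentials entered 19% -> 29%.
awk 'BEGIN {
  printf "visits:      +%.0f%% relative\n", (38 - 32) / 32 * 100
  printf "credentials: +%.0f%% relative\n", (29 - 19) / 19 * 100
}'
```

    So personalisation raised the visit rate by roughly a fifth and the credential-entry rate by roughly half, in relative terms.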

    ‘Phishing emails are getting better and better, and increasingly personalised,’ explains professor of cybersecurity Marianne Junger of the IEBIS department on the UT website. The suspicion that personalised fake emails are even more effective now turns out to be correct. ‘People are apparently sensitive to the content of the email after all. They got the impression that they had to respond quickly.’

    Not just stupid people
    Junger stresses that it is a misconception that only stupid people fall for phishing mails. ‘In their contact with others, people assume that the other person is telling the truth. That is down to the truth bias: the tilt toward believing what you hear.’

    The ethics committee of the faculty concerned and the university’s HR department approved the study in advance. The data that employees entered were not stored.

    Photo credit: Chang’r/, CC BY-ND 2.0.

    The post Student experiment shows: personalised phishing mail is downright dangerous appeared first on Fraudehelpdesk.

    2015-11-12T08:25:19+00:00 Tjerk Notten 2015-11-12T08:25:19+00:00 Fraudehelpdesk - Algemeen