Watch a Guy Install Every Version of Windows and Draw Dicks in All of Them

Image: YouTube / Gizmodo

Just for fun, a random YouTuber upgraded a single computer from Windows 1.0 to Windows 10—including every version in between. Seeing the whole process unfold before your eyes is nostalgic as hell. Watching a guy draw dicks in every version of Windows is a little weird, though.

Not only does this installation enthusiast draw dicks in every version of Windows, he also mixes up his methods. Obviously, there’s the obligatory early MS Paint dick drawing. But later in the version history, you’ll see dicks in PageMaker as well as Word and Excel.

Image: YouTube / TheRasteri

It’s like the Windows-based version of Jonah Hill’s strange dick-drawing habit in the movie Superbad.

All dick drawings aside, you’ll get a thrill out of seeing how hilariously awful the first version of Windows was. And it’s a blast to see games like SkiFree and PipeDream again. Remember Windows ME? I don’t.

[Digg]

via Gizmodo

Millions of Records Leaked From Huge US Corporate Database

Millions of records from a commercial corporate database have been leaked. ZDNet reports: The database, about 52 gigabytes in size, contains just under 33.7 million unique email addresses and other contact information from employees of thousands of companies, representing a large portion of the US corporate population. Dun & Bradstreet, a business services giant, confirmed that it owns the database, which it acquired as part of a 2015 deal to buy NetProspex for $125 million. The purchased database contains dozens of fields, some including personal information such as names, job titles and functions, work email addresses, and phone numbers. Other information includes more generic corporate and publicly sourced data, such as believed office location, the number of employees in the business unit, and other descriptions of the kind of industry the company falls into, such as advertising, legal, media and broadcasting, and telecoms.





via Slashdot

Rare Nuclear Test Films Saved, Declassified, and Uploaded to YouTube

A newly declassified nuclear test explosion from 1958, part of Operation Hardtack (YouTube)

From 1945 until 1962, the United States conducted 210 atmospheric nuclear tests—the kind with the big mushroom cloud and all that jazz. Above-ground nuke testing was banned in 1963, but there are thousands of films from those tests that have just been rotting in secret vaults around the country. But starting today you can see many of them on YouTube.

Lawrence Livermore National Laboratory (LLNL) weapon physicist Greg Spriggs has made it his mission to preserve these 7,000 known films, many of them literally decomposing while they’re still classified and hidden from the public.

According to LLNL, this 5-year project has been tremendously successful, with roughly 4,200 films already scanned and around 750 of those now declassified. Sixty-four of the declassified films have been uploaded today in what Spriggs is calling an “initial set.”

“You can smell vinegar when you open the cans, which is one of the byproducts of the decomposition process of these films,” Spriggs said in a statement to Gizmodo.

“We know that these films are on the brink of decomposing to the point where they’ll become useless,” said Spriggs. “The data that we’re collecting now must be preserved in a digital form because no matter how well you treat the films, no matter how well you preserve or store them, they will decompose. They’re made out of organic material, and organic material decomposes. So this is it. We got to this project just in time to save the data.”

It’s a race against time, and Spriggs figures it will take at least another two years to scan the remaining films. The declassification of all the remaining 3,480 films, a process that requires military review, will take even longer.

“It’s just unbelievable how much energy’s released,” said Spriggs. “We hope that we would never have to use a nuclear weapon ever again. I think that if we capture the history of this and show what the force of these weapons are and how much devastation they can wreak, then maybe people will be reluctant to use them.”

via Gizmodo

500 Startups will keep investing in Latin America with new $10M fund

500 Startups is increasing its commitment to global investing with a new Latin America fund, targeting $10 million, named Luchadores II (Spanish for "wrestlers"). The fund is 500’s second aimed at the region and one of a growing number of its seed investment vehicles targeted at underserved markets across Europe, Asia and the Americas.

The accelerator has been investing in Latin America in one form or another since 2010. Santiago Zavala, managing partner of the new fund, is targeting approximately 120 companies for investment with the fresh powder in hopes of pushing the number of Latin American unicorns into the double digits.

Dave McClure, founding partner of 500 Startups, has long been bullish on the arbitrage opportunities made available through international investing. Deals in the United States, particularly in Silicon Valley, are often priced at a premium because of their competitiveness.

“We’re seeing ten to one leverage on additional capital invested,” said McClure of some international bets. 500’s investments in Latin America have gone on to raise over $95 million in follow-on capital.

But the challenge of investing in Latin American startups is that they lack strong ecosystem support. Larger B, C and D rounds are hard to find in the region and local acquirers that anchor an entrepreneurial ecosystem are limited.

This is why the International Finance Corporation (IFC) is joining 500 as a limited partner in its new fund. The IFC has traditionally invested in later stage companies, but over the last two years it has been involving itself in seed stage funds as a limited partner.

“We’re trying to find best-of-breed microfund managers in all developing markets,” said Nikunj Jinsi, global head of VC investments for the IFC.

McClure points to Accel Partners, Index Ventures, Sequoia Capital and Tiger Global as funds that are doing their part to create international pipelines for startups from inception to exit.

“Other funds are starting too late and expecting developed companies,” added McClure.

Some regions within Latin America have grown faster than others. Mexico City, where 500’s operations are located, has matured but other cities still lack strong mentor networks and other necessary resources.

500 Startups tries to maintain a strong relationship with its international affiliates through seed programs. The firm regularly sends partners to different geographies to mentor startups and offers foreign companies the opportunity to visit the Valley.

Though McClure wouldn’t commit to it, today’s Latin American fund announcement hints strongly of things to come in Asia. The firm recently rekindled its presence in China, though it has yet to announce a dedicated fund in the region.

via TechCrunch

Corporate database leak exposes millions of contact details

A 52.2GB corporate database that has leaked online exposes the contact details of over 33.7 million employees in the United States. The list includes government workers, most of whom are soldiers and other military personnel from the Department of Defense. According to ZDNet, the database came from business services firm Dun & Bradstreet, which sells it to marketers that send targeted email campaigns. Dun & Bradstreet denies suffering a security breach — the company says the leaked information matches the type and format it delivers to customers. It could have come from any of its thousands of clients.

Troy Hunt, who runs breach notification website Have I Been Pwned, was the one who discovered the leak. After analyzing its contents, he found that they’re composed of millions of people’s names, their corresponding work email addresses and phone numbers, as well as their companies and job titles. Since it’s a database sold to marketers, the leaked details all came from US-based companies and government agencies. Based on Hunt’s analysis, here are the top ten entities in the list, along with the number of affected employees:

1. Department of Defense: 101,013
2. United States Postal Service: 88,153
3. AT&T: 67,382
4. Wal-Mart: 55,421
5. CVS: 40,739
6. The Ohio State University: 38,705
7. Citigroup: 35,292
8. Wells Fargo Bank, National Association: 34,928
9. Kaiser Foundation Hospitals: 34,805
10. International Business Machines (IBM) Corporation: 33,412

While the database doesn’t contain more sensitive information, such as credit card numbers or SSNs, Hunt says it’s an "absolute goldmine for [targeted] phishing."

He told ZDNet:

"From this data, you can piece together organizational structures and tailor messaging to create an air of authenticity and that’s something that’s attractive to crooks and nation-state actors alike."

Hunt has already uploaded the contents of the database on Have I Been Pwned, so you can check if your details have been compromised anytime.

Source: ZDNet, Troy Hunt

via Engadget

Silverfin, a ‘connected accounting platform’, raises $4.5M Series A led by Index

Silverfin, a startup out of Ghent, Belgium (of all places) that offers a ‘connected accounting platform’ to help businesses stay on top of their financial data, has picked up $4.5 million in Series A funding.

Index Ventures led the round, with participation from existing investors, while the cash injection will be used to expand the team and build out the company’s international presence, starting with the U.K.

Founded in 2013, Silverfin’s platform plugs into popular accounting software and other financial data sources to help finance departments, accountancy firms and consultants, such as external tax specialists, get much better real-time visibility of a company’s financial data.

Or another way to describe it might be ‘Salesforce for financial data,’ since stakeholders can communicate via the platform, too.

The idea is to consolidate (or rely less on) a myriad of legacy and fragmented financial software tools, applications and, of course, Excel spreadsheets, and in turn reduce the tendency for error, including automatically flagging up anomalies. It’s also designed to make generating reports, such as those that are required quarterly or yearly, a lot less painful and updatable in real-time.

I’m told that 64,000 businesses already manage their finances on Silverfin, either directly or via an accountancy firm. The latter includes Deloitte, the well-known audit, consulting, tax and advisory firm.

“Ultimately we’re building a central nervous system for financial advisory and services firms, paving the way for us to become the first real-time monitor of businesses’ financial data,” says Silverfin co-founder Joris Van Der Gucht in a statement.

Adds Jan Hammer, partner at Index Ventures: “Fully automating data collection and reconciliation has been described as ‘the holy grail’ of accounting, because it will transform the financial advisory, accounting and auditing sectors. With Silverfin’s ability to integrate with existing software and provide a central data reconciliation platform, Tim, Joris and the team have a huge opportunity to become the gold standard solution for connected accounting”.

via TechCrunch

White Castle Opening New High Street Location Near OSU

Walker Evans

After the Old North Columbus White Castle location closed in 2010 and the Short North White Castle followed suit in 2016 (albeit temporarily), there’s been a shortage of places near The Ohio State University to seek out sliders. That will soon change, as White Castle has unveiled plans to open a new location on High Street in the near future.

According to plans submitted to the University Area Review Board, White Castle is slated to open at 2106 North High Street in a former Radio Shack location. Zach Schiff — Partner at Schiff Properties, the owner of the commercial building — confirmed that the store would be a fairly traditional location for the hamburger chain, although submitted plans indicate that an upper seating level will provide 28 customers with second-floor seating that overlooks High Street.

Representatives from White Castle did not respond to inquiries as of the time of publishing, and no opening timeframe has been announced. The University Area Review Board will meet to review the White Castle submission on Thursday.

For more information, visit www.whitecastle.com.


via ColumbusUnderground.com

MySQL in the Cloud – Online Migration from Amazon RDS to EC2 instance (part 1)

In our previous blog, we saw how easy it is to get started with RDS for MySQL. It is a convenient way to deploy and use MySQL, without worrying about operational overhead. The tradeoff though is reduced control, as users are entirely reliant on Amazon staff in case of poor performance or operational anomalies. No access to the data directory or physical backups makes it hard to move data out of RDS. This can be a major problem if your database outgrows RDS, and you decide to migrate to another platform. This two-part blog shows you how to do an online migration from RDS to your own MySQL server.

We’ll be using EC2 to run our own MySQL Server. It can be a first step for more complex migrations to your own private datacenters. EC2 gives you access to your data, so xtrabackup can be used. EC2 also allows you to set up SSH tunnels, and it removes the requirement of setting up hardware VPN connections between your on-premises infrastructure and the VPC.
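As a concrete illustration of the SSH tunnel option (not part of the original setup), a machine outside the VPC can reach the RDS endpoint through an EC2 host that has access to it. The key file, user name and EC2 address below are placeholders:

```shell
# Forward local port 3307 through an EC2 host that can reach RDS,
# then connect to the RDS instance as if it were running locally.
# my-key.pem, ec2-user and <ec2-public-ip> are illustrative placeholders.
ssh -i my-key.pem -N \
    -L 3307:rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com:3306 \
    ec2-user@<ec2-public-ip> &

mysql -h 127.0.0.1 -P 3307 -u tpcc -p
```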

Assumptions

Before we start, we need to make a couple of assumptions – especially around security. First and foremost, we assume that the RDS instance is not accessible from outside of AWS. We also assume that you have an application in EC2. This implies that either the RDS instance and the rest of your infrastructure share a VPC, or that access is configured between them one way or the other. In short, we assume that you can create a new EC2 instance and that it will have access (or can be configured to have access) to your MySQL RDS instance.

We have configured ClusterControl on the application host. We’ll use it to manage our EC2 MySQL instance.

Initial setup

In our case, the RDS instance shares the same VPC with our “application” (EC2 instance with IP 172.30.4.228) and host which will be a target for the migration process (EC2 instance with IP 172.30.4.238). As the application we are going to use tpcc-MySQL benchmark executed in the following way:

./tpcc_start -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com -d tpcc1000 -u tpcc -p tpccpass -w 20 -r 60 -l 600 -i 10 -c 4

Initial plan

We are going to perform a migration using the following steps:

  1. Setup our target environment using ClusterControl – install MySQL on 172.30.4.238
  2. Then, install ProxySQL, which we will use to manage our traffic at the time of failover
  3. Dump the data from the RDS instance
  4. Load the data into our target host
  5. Set up replication between RDS instance and target host
  6. Switchover traffic from RDS to target host
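Steps 3 through 5 can be sketched roughly as below. This is a hedged outline, not the exact procedure from part 2 of this series: it assumes binary logging is enabled on the RDS instance and that a replication user exists there, and the binlog coordinates and replication credentials shown are placeholders (RDS does not permit mysqldump’s --master-data option, so coordinates have to be obtained separately).

```shell
# 3. Dump the data from RDS as a single consistent snapshot
mysqldump -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com \
    -u tpcc -ptpccpass --single-transaction --routines --triggers \
    tpcc1000 > tpcc1000.sql

# 4. Load the dump into our target EC2 host
mysql -h 172.30.4.238 -u root -p tpcc1000 < tpcc1000.sql

# 5. Point the target host at the RDS instance and start replicating
#    (the log file, position and repl_user credentials are placeholders)
mysql -h 172.30.4.238 -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com',
  MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_pass',
  MASTER_LOG_FILE='mysql-bin-changelog.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL
```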

Prepare environment using ClusterControl

Assuming we have ClusterControl installed (if you don’t you can grab it from: http://ift.tt/2kRZ2IQ), we need to setup our target host. We will use the deployment wizard from ClusterControl for that:

Deploying a Database Cluster in ClusterControl

Once this is done, you will see a new cluster (in this case, just your single server) in the cluster list:

Database Cluster in ClusterControl

The next step is to install ProxySQL – starting from ClusterControl 1.4 you can do it easily from the UI. We covered this process in detail in this blog post. When installing it, we picked our application host (172.30.4.228) as the host to install ProxySQL on. When installing, you also have to pick a host to route your traffic to. As we have only our “destination” host in the cluster, you can include it, but then a couple of changes are needed to redirect traffic to the RDS instance.

If you have chosen to include the destination host (in our case it was 172.30.4.238) in the ProxySQL setup, you’ll see the following entries in the mysql_servers table:

mysql> select * from mysql_servers\G
*************************** 1. row ***************************
       hostgroup_id: 20
           hostname: 172.30.4.238
               port: 3306
             status: ONLINE
             weight: 1
        compression: 0
    max_connections: 100
max_replication_lag: 10
            use_ssl: 0
     max_latency_ms: 0
            comment: read server
*************************** 2. row ***************************
       hostgroup_id: 10
           hostname: 172.30.4.238
               port: 3306
             status: ONLINE
             weight: 1
        compression: 0
    max_connections: 100
max_replication_lag: 10
            use_ssl: 0
     max_latency_ms: 0
            comment: read and write server
2 rows in set (0.00 sec)

ClusterControl configured ProxySQL to use hostgroups 10 and 20 to route writes and reads to the backend servers. We will have to remove the currently configured host from those hostgroups and add the RDS instance there. First, though, we have to ensure that ProxySQL’s monitor user can access the RDS instance.

mysql> SHOW VARIABLES LIKE 'mysql-monitor_username';
+------------------------+------------------+
| Variable_name          | Value            |
+------------------------+------------------+
| mysql-monitor_username | proxysql-monitor |
+------------------------+------------------+
1 row in set (0.00 sec)
mysql> SHOW VARIABLES LIKE 'mysql-monitor_password';
+------------------------+---------+
| Variable_name          | Value   |
+------------------------+---------+
| mysql-monitor_password | monpass |
+------------------------+---------+
1 row in set (0.00 sec)

We need to grant this user access to RDS. If we needed it to track replication lag, the user would also require the ‘REPLICATION CLIENT’ privilege. In our case it is not needed, as we don’t have a slave RDS instance – ‘USAGE’ will be enough.

root@ip-172-30-4-228:~# mysql -ppassword -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 210
Server version: 5.7.16-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE USER 'proxysql-monitor'@172.30.4.228 IDENTIFIED BY 'monpass';
Query OK, 0 rows affected (0.06 sec)

Now it’s time to reconfigure ProxySQL. We are going to add the RDS instance to both the writer (10) and reader (20) hostgroups. We will also remove 172.30.4.238 from those hostgroups – rather than deleting the entries, we’ll move them out of the way by adding 100 to each hostgroup_id (10 becomes 110, 20 becomes 120).

mysql> INSERT INTO mysql_servers (hostgroup_id, hostname, max_connections, max_replication_lag) VALUES (10, 'rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com', 100, 10);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO mysql_servers (hostgroup_id, hostname, max_connections, max_replication_lag) VALUES (20, 'rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com', 100, 10);
Query OK, 1 row affected (0.00 sec)
mysql> UPDATE mysql_servers SET hostgroup_id=110 WHERE hostname='172.30.4.238' AND hostgroup_id=10;
Query OK, 1 row affected (0.00 sec)
mysql> UPDATE mysql_servers SET hostgroup_id=120 WHERE hostname='172.30.4.238' AND hostgroup_id=20;
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.07 sec)
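As an optional sanity check (not in the original post), you can confirm the runtime configuration from the ProxySQL admin interface – after the LOAD above, hostgroups 10 and 20 should contain only the RDS endpoint:

```sql
-- Verify that only the RDS endpoint now serves hostgroups 10 and 20
SELECT hostgroup_id, hostname, status
FROM runtime_mysql_servers
WHERE hostgroup_id IN (10, 20);
```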

The last step required before we can use ProxySQL to redirect our traffic is to add our application user to ProxySQL.

mysql> INSERT INTO mysql_users (username, password, active, default_hostgroup) VALUES ('tpcc', 'tpccpass', 1, 10);
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK; SAVE MYSQL USERS TO MEMORY;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.00 sec)
mysql> SELECT username, password FROM mysql_users WHERE username='tpcc';
+----------+-------------------------------------------+
| username | password                                  |
+----------+-------------------------------------------+
| tpcc     | *8C446904FFE784865DF49B29DABEF3B2A6D232FC |
+----------+-------------------------------------------+
1 row in set (0.00 sec)

Quick note – we executed “SAVE MYSQL USERS TO MEMORY;” only to have the password hashed not only in RUNTIME but also in the working memory buffer. You can find more details about ProxySQL’s password hashing mechanism in its documentation.

We can now redirect our traffic to ProxySQL. How to do this depends on your setup; we simply restarted tpcc and pointed it at ProxySQL.

Redirecting Traffic with ProxySQL

At this point, we have built a target environment to which we will migrate. We also prepared ProxySQL and configured it for our application to use. We now have a good foundation for the next step, which is the actual data migration. In the next post, we will show you how to copy the data out of RDS into our own MySQL instance (running on EC2). We will also show you how to switch traffic to your own instance while applications continue to serve users, without downtime.

via Planet MySQL

Mussels inspire glue that sticks despite water

Scientists have modeled a new adhesive on shellfish that stick to surfaces, creating a glue that works underwater. It’s stronger than many commercial glues created for the purpose.

“Our current adhesives are terrible at wet bonding, yet marine biology solved this problem eons ago,” says Jonathan Wilker, professor of chemistry and materials engineering at Purdue University.

“Mussels, barnacles, and oysters attach to rocks with apparent ease. In order to develop new materials able to bind within harsh environments, we made a biomimetic polymer that is modeled after the adhesive proteins of mussels.”

New findings, published in the journal ACS Applied Materials and Interfaces, show that the bio-based glue performed better than 10 commercial adhesives when used to bond polished aluminum. When compared with the five strongest commercial glues included in the study, the new adhesive performed better when bonding wood, Teflon, and polished aluminum. It was the only adhesive of those tested that worked with wood and far out-performed the other adhesives when used to join Teflon.

Mussel chemistry

Mussels extend hair-like fibers that attach to surfaces using plaques of adhesive. Proteins in the glue contain the amino acid DOPA, which harbors the chemistry needed to provide strength and adhesion. The researchers have now inserted this chemistry of mussel proteins into a biomimetic polymer called poly(catechol-styrene), creating an adhesive by harnessing the chemistry of compounds called catechols, which DOPA contains.

“We are focusing on catechols given that the animals use this type of chemistry so successfully,” Wilker says. “Poly(catechol-styrene) is looking to be, possibly, one of the strongest underwater adhesives found to date.”

Sandcastle worms teach us how to make underwater glue

While most adhesives interact with water instead of sticking to surfaces, the catechol groups may have a special talent for “drilling down” through surface waters in order to bind onto surfaces, he says. The researchers conducted a series of underwater bond tests in tanks of artificial seawater.

“These findings are helping to reveal which aspects of mussel adhesion are most important when managing attachment within their wet and salty environment,” Wilker says. “All that is needed for high strength bonding underwater appears to be a catechol-containing polymer.”

17X stronger

Surprisingly, the new adhesive also proved to be about 17 times stronger than the natural adhesive produced by mussels. “In biomimetics, where you try to make synthetic versions of natural materials and compounds, you almost never can achieve performance as good as the natural system,” Wilker says.

One explanation might be that the animals have evolved to produce adhesives that are only as strong as they need to be for their specific biological requirements. The natural glues might be designed to give way when the animals are hunted by predators, breaking off when pulled from a surface instead of causing injury to internal tissues.

“We have shown that this adhesive system works quite well within controlled laboratory conditions. In the future we want to move on to more practical applications in the real world,” Wilker says.

The Office of Naval Research funded the work.

Source: Purdue University


via Futurity.org