Back to basics: Isolation Levels In MySQL

In this blog, we will look at a very basic but important property of transactions, the “I” of “ACID”: ISOLATION.

Isolation defines the way in which the MySQL server (InnoDB) separates each transaction from other concurrently running transactions in the server, and ensures that transactions are processed in a reliable way. If transactions are not isolated, one transaction could modify the data that another transaction is reading, creating data inconsistency. Isolation levels determine how isolated the transactions are from each other.

MySQL supports all four isolation levels that the SQL standard defines. The four isolation levels are:

  • READ UNCOMMITTED
  • READ COMMITTED
  • REPEATABLE READ
  • SERIALIZABLE

Isolation levels can be set globally or per session, based on our requirements.
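For example, here is a minimal sketch of both scopes (note that the server variable is named tx_isolation up to MySQL 5.7 and transaction_isolation from MySQL 8.0):

-- Check the current global and session isolation levels (MySQL 5.7 naming)
SELECT @@GLOBAL.tx_isolation, @@SESSION.tx_isolation;

-- Set globally (applies to sessions started afterwards)
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Set for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;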

 


 

Choosing the best isolation level has a great impact on the database. Each level of isolation comes with a trade-off, so let’s discuss each of them.

READ UNCOMMITTED:

In the READ-UNCOMMITTED isolation level, there isn’t much isolation between the transactions at all, i.e., no locks. A transaction can see changes to data made by other transactions that are not committed yet. This is the lowest isolation level, and it is highly performant since there is no overhead of maintaining locks. With this isolation level, there is always a chance of getting a “dirty read”.

That means transactions could be reading data that may not even exist eventually, because the other transaction that was updating the data rolled back the changes and didn’t commit. Let’s see the image below for a better understanding.

Dirty reads

Suppose a transaction T1 modifies a row. If a transaction T2 reads the row and sees the modification even though T1 has not committed it, that is a dirty read. The problem is that if T1 rolls back, T2 doesn’t know that, and will be left in a state of “totally perplexed”.
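Here is a minimal sketch of the above, assuming a hypothetical accounts table whose row id 1 starts with balance = 500:

-- Session T2
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Session T1
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- not committed yet

-- Session T2: sees 400, the uncommitted change (dirty read)
SELECT balance FROM accounts WHERE id = 1;

-- Session T1: the value T2 just read never logically existed
ROLLBACK;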

READ COMMITTED:

In the READ-COMMITTED isolation level, the phenomenon of dirty read is avoided, because any uncommitted changes are not visible to other transactions until the change is committed. This is the default isolation level in most popular RDBMS software, but not in MySQL.

Within this isolation level, each SELECT uses its own snapshot of the data that was committed before the SELECT was executed. Because each SELECT has its own snapshot, here is the trade-off: the same SELECT, when run multiple times during the same transaction, could return different result sets. This phenomenon is called a non-repeatable read.

Non-repeatable reads

A non-repeatable read occurs when a transaction runs the same query twice but gets a different result set each time. Suppose T2 reads some rows, and T1 then changes a row and commits the change; when T2 reads the same row set again, it gets a different result, i.e., the initial read is non-repeatable.
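Sketched with the same hypothetical accounts table, both sessions running in READ COMMITTED:

-- Session T2
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;  -- returns 500

-- Session T1
UPDATE accounts SET balance = 400 WHERE id = 1;
COMMIT;

-- Session T2: same query, same transaction, different result (non-repeatable read)
SELECT balance FROM accounts WHERE id = 1;  -- now returns 400
COMMIT;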

Read-committed is the recommended isolation level for Galera (PXC, MariaDB Cluster) and InnoDB clusters.

REPEATABLE READ:

In the REPEATABLE-READ isolation level, the phenomenon of non-repeatable read is avoided. It is the default isolation level in MySQL. This isolation level returns the same result set throughout the transaction execution for the same SELECT, run any number of times during the progression of the transaction.

This is how it works: a snapshot is taken the first time a SELECT is run during the transaction, and the same snapshot is used throughout the transaction whenever the same SELECT is executed. A transaction running in this isolation level does not take into account any changes to data made by other transactions, regardless of whether those changes have been committed or not. This ensures that reads are always consistent (repeatable). Maintaining the snapshot adds overhead and can impact performance.
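The earlier READ COMMITTED example behaves differently here; a sketch with the same hypothetical accounts table, T2 now running in REPEATABLE READ:

-- Session T2
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;  -- returns 500; the snapshot is taken here

-- Session T1
UPDATE accounts SET balance = 400 WHERE id = 1;
COMMIT;

-- Session T2: still reads from its snapshot
SELECT balance FROM accounts WHERE id = 1;  -- still returns 500
COMMIT;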

Although this isolation level solves the problem of non-repeatable read, another possible problem that occurs is phantom reads.

A phantom is a row that appears in a result set where it was not visible before. InnoDB and XtraDB solve the phantom read problem with multi-version concurrency control (MVCC).


Phantom reads
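A sketch of where a phantom can still surface in InnoDB under REPEATABLE READ: consistent (non-locking) reads use the MVCC snapshot, but locking reads see the latest committed rows. Again assuming the hypothetical accounts table:

-- Session T1
START TRANSACTION;
SELECT COUNT(*) FROM accounts WHERE balance > 100;  -- say 5 rows

-- Session T2
INSERT INTO accounts (id, balance) VALUES (9, 500);
COMMIT;

-- Session T1: the plain SELECT still sees 5 rows (snapshot),
-- but a locking read surfaces the new row as a phantom
SELECT COUNT(*) FROM accounts WHERE balance > 100 FOR UPDATE;  -- 6 rows
COMMIT;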

SERIALIZABLE:

SERIALIZABLE completely isolates the effects of one transaction from others. It is similar to REPEATABLE READ, with the additional restriction that rows selected by one transaction cannot be changed by another until the first transaction finishes. The phenomenon of phantom reads is avoided. This is the strongest possible isolation level. AWS Aurora does not support this isolation level.
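Under the hood, InnoDB achieves this by implicitly converting plain SELECTs into shared-lock reads. A sketch with the same hypothetical table:

-- Session T1
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;  -- implicitly a locking (shared) read

-- Session T2: blocks until T1 commits or rolls back
UPDATE accounts SET balance = 0 WHERE id = 1;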

 

Photo by Alberto Triano on Unsplash

via Planet MySQL
Back to basics: Isolation Levels In MySQL

Do You Use Inflatable Air Shims such as the Winbag?

Winbag Air Shim

Shown here is a Winbag inflatable air shim.

More specifically, it’s a durable inflatable air wedge that’s made from a fiber-reinforced non-marking material.

It’s as flat as 3/32″, allowing it to slide into narrow spaces, and can be inflated to up to 2″. It can lift up to 300 pounds, per Winbag.

The Winbag air shim is said to be useful for installing doors, windows, cabinets, appliances, and all kinds of other fixtures where you might need to make careful height or spacing adjustments.

Have you used one? What kinds of things have you used it for? Don’t have one? What kinds of applications might you use this for?

Price: ~$15-20 each, depending on store and quantity

Buy Now(via Amazon)
Buy Now(via Tool Nut)

I’ve heard lots of good things about the Winbag, and can see a few things I’d use it for (such as installing anti-vibrational pads to a washing machine that’s already installed and blocked in place).

This demo video, although a little infomercial-toned, shows what the Winbag can do:

There’s a competing brand, the Air Shim by Calculated Industries, which can sometimes be found for less money.

Air Shim

They also offer a larger version, the XL 500.

See Also(via Amazon)


via ToolGuyd
Do You Use Inflatable Air Shims such as the Winbag?

How To Create Comment Nesting In Laravel From Scratch

How To Create Comment Nesting In Laravel From Scratch is today’s main topic. In any topic-specific forum, there is always a structure where you need to reply to someone’s comment, and then somebody replies to that comment, and so on. So comment nesting is very useful in any web application that invites public discussion. In this tutorial, we will build it from scratch, using a Polymorphic relationship.

Create Comment Nesting In Laravel From Scratch

As always, install Laravel using the following command. I am using Laravel Valet.

Step 1: Install and configure Laravel.

laravel new comments

# or

composer create-project laravel/laravel comments --prefer-dist

Go to the project.

cd comments

Open the project in your editor.

code .

Configure the MySQL database in the .env file.

Create an auth using the following command.

php artisan make:auth

Now migrate the database using the following command.

php artisan migrate

Step 2: Create a model and migration.

Create a Post model and migration using the following command.

php artisan make:model Post -m

Define the schema in the post migration file.

// create_posts_table

public function up()
{
    Schema::create('posts', function (Blueprint $table) {
        $table->increments('id');
        $table->string('title');
        $table->text('body');
        $table->timestamps();
    });
}

Also, we need to create the Comment model and migration, so create them using the following command.

php artisan make:model Comment -m

Okay, now we will use the Polymorphic relationship between the models. So we need to define the schema that way.

// create_comments_table

public function up()
{
    Schema::create('comments', function (Blueprint $table) {
       $table->increments('id');
       $table->integer('user_id')->unsigned();
       $table->integer('parent_id')->unsigned()->nullable(); // null for top-level comments
       $table->text('body');
       $table->integer('commentable_id')->unsigned();
       $table->string('commentable_type');
       $table->timestamps();
    });
}

Now, migrate the database using the following command.

php artisan migrate

 

How To Create Comment Nesting In Laravel From Scratch

Step 3: Define Polymorphic Relationships.

Now, we need to define the Polymorphic relationships between the models. So write the following code inside app >> Post.php file. 

<?php

// Post.php 

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    public function comments()
    {
        return $this->morphMany(Comment::class, 'commentable')->whereNull('parent_id');
    }
}

Here, we have fetched only the comments whose parent_id is null. The reason is that we need to display and save parent-level comments through this relationship, and this is how we differentiate between a comment and its replies.

Post also belongs To a User. So we can define that relationship as well.

<?php

// Post.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    public function user()
    {
        return $this->belongsTo(User::class);
    }
    public function comments()
    {
        return $this->morphMany(Comment::class, 'commentable')->whereNull('parent_id');
    }
}

Define the Comment relationship with the Post. Write the following code inside Comment.php file.

<?php

// Comment.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Comment extends Model
{
    public function user()
    {
        return $this->belongsTo(User::class);
    }
}

Step 4: Define the views, controller, and routes.

Create a PostController.php file using the following command.

php artisan make:controller PostController

Next step is to define the route for the view and store the post in the database. Write the following code inside routes >> web.php file.

<?php

// web.php

Route::get('/', function () {
    return view('welcome');
});

Auth::routes();

Route::get('/home', 'HomeController@index')->name('home');

Route::get('/post/create', 'PostController@create')->name('post.create');
Route::post('/post/store', 'PostController@store')->name('post.store');

Write the following code inside PostController.php file.

<?php

// PostController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class PostController extends Controller
{

    public function __construct()
    {
        $this->middleware('auth');
    }

    public function create()
    {
        return view('post');
    }

    public function store(Request $request)
    {
        // store code
    }
}

Now, first, we need to create a form for creating the post. So create a blade file inside resources >> views folder called post.blade.php. Write the following code inside a post.blade.php file.

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-header">Create Post</div>
                <div class="card-body">
                    <form method="post" action="">
                        <div class="form-group">
                            @csrf
                            <label class="label">Post Title: </label>
                            <input type="text" name="title" class="form-control" required/>
                        </div>
                        <div class="form-group">
                            <label class="label">Post Body: </label>
                            <textarea name="body" rows="10" cols="30" class="form-control" required></textarea>
                        </div>
                        <div class="form-group">
                            <input type="submit" class="btn btn-success" />
                        </div>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

Okay, now go to the resources >> views >> layouts >> app.blade.php file and add a link to create a post.

We need to add the link to the @else part of the navigation bar. That way, if the user is successfully logged in, he or she can create a post; otherwise, he or she cannot.

@else
     <li class="nav-item">
          <a class="nav-link" href="">Create Post</a>
     </li>

Now, go to this link: http://comments.test/register and register a user. After logging in, you can see Create Post in the navbar. Click that item, and you will be redirected to this route: http://comments.test/post/create. You can see our form is there with the title and body form fields.

Comment Nesting in Laravel 5.6

 

Step 5: Save and display the Post.

Okay, now we need to save the post in the database, so write the following code inside the store function of the PostController.php file.

<?php

// PostController.php

namespace App\Http\Controllers;
use App\Post;

use Illuminate\Http\Request;

class PostController extends Controller
{

    public function __construct()
    {
        $this->middleware('auth');
    }

    public function create()
    {
        return view('post');
    }

    public function store(Request $request)
    {
        $post =  new Post;
        $post->title = $request->get('title');
        $post->body = $request->get('body');

        $post->save();

        return redirect('posts');

    }
}

After saving the post, we are redirecting to the posts list page. We need to define its route too. Add the following route inside a web.php file.

// web.php

Route::get('/posts', 'PostController@index')->name('posts');

Also, we need to define the index function inside PostController.php file.

// PostController.php

public function index()
{
    $posts = Post::all();

    return view('index', compact('posts'));
}

Create an index.blade.php file inside views folder. Write the following code inside an index.blade.php file.

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <table class="table table-striped">
                <thead>
                    <th>ID</th>
                    <th>Title</th>
                    <th>Action</th>
                </thead>
                <tbody>
                @foreach($posts as $post)
                <tr>
                    <td>{{ $post->id }}</td>
                    <td>{{ $post->title }}</td>
                    <td>
                        <a href="{{ route('post.show', $post->id) }}" class="btn btn-primary">Show Post</a>
                    </td>
                </tr>
                @endforeach
                </tbody>

            </table>
        </div>
    </div>
</div>
@endsection

Now, define the show route inside a web.php file. Add the following line of code inside a web.php file.

// web.php

Route::get('/post/show/{id}', 'PostController@show')->name('post.show');

Also, define the show() function inside PostController.php file.

// PostController.php

public function show($id)
{
    $post = Post::find($id);

    return view('show', compact('post'));
}

Create a show.blade.php file inside views folder and add the following code.

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-body">
                    <p><b>{{ $post->title }}</b></p>
                    <p>
                        {{ $post->body }}
                    </p>
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

Okay, now you can see the individual posts. Fine till now.

Next step is to display the comments on this post.

Laravel Polymorphic morphMany relationship tutorial

 

Step 6: Create a form to add a comment.

First, create a CommentController.php file using the following command.

php artisan make:controller CommentController

Now, we need to create a form inside a show.blade.php file that can add the comment in the particular post.

Write the following code inside a show.blade.php file.

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-body">
                    <p><b>{{ $post->title }}</b></p>
                    <p>
                        {{ $post->body }}
                    </p>
                    <hr />
                    <h4>Add comment</h4>
                    <form method="post" action="">
                        @csrf
                        <div class="form-group">
                            <input type="text" name="comment_body" class="form-control" />
                            <input type="hidden" name="post_id" value="" />
                        </div>
                        <div class="form-group">
                            <input type="submit" class="btn btn-warning" value="Add Comment" />
                        </div>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

So, we have added a form that can add the comment. Now, we need to define the route to store the comment.

// web.php

Route::post('/comment/store', 'CommentController@store')->name('comment.add');

Okay, now write the store() function and save the comment using the morphMany() relationship.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Comment;
use App\Post;

class CommentController extends Controller
{
    public function store(Request $request)
    {
        $comment = new Comment;
        $comment->body = $request->get('comment_body');
        $comment->user()->associate($request->user());
        $post = Post::find($request->get('post_id'));
        $post->comments()->save($comment);

        return back();
    }
}

Okay, if all is well, we can now add comments. Remember, we are not displaying the comments yet; we have only completed the save functionality for parent-level comments, whose parent_id is null.

Laravel Nested Set Example

 

Step 7: Display the comment.

Now, as we have set up the relationship between a Comment and a Post, we can easily pluck out all the comments related to a particular post.

So, write the following code inside the show.blade.php file. I am writing the whole file to display the comments. Remember, this is the parent comments. We still need to create a reply button and then show all the replies.

<!-- show.blade.php -->

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-body">
                    <p><b>{{ $post->title }}</b></p>
                    <p>
                        {{ $post->body }}
                    </p>
                    <hr />
                    <h4>Display Comments</h4>
                    @foreach($post->comments as $comment)
                        <div class="display-comment">
                            <strong>{{ $comment->user->name }}</strong>
                            <p>{{ $comment->body }}</p>
                        </div>
                    @endforeach
                    <hr />
                    <h4>Add comment</h4>
                    <form method="post" action="">
                        @csrf
                        <div class="form-group">
                            <input type="text" name="comment_body" class="form-control" />
                            <input type="hidden" name="post_id" value="" />
                        </div>
                        <div class="form-group">
                            <input type="submit" class="btn btn-warning" value="Add Comment" />
                        </div>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

Now, add a comment, and it will show up here at the same URL.

Laravel Nesting Relationships

 

Step 8: Create a Reply form and save replies.

Now, we need to create a function called replies() inside Comment.php model.

<?php

// Comment.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Comment extends Model
{
    public function user()
    {
        return $this->belongsTo(User::class);
    }

    public function replies()
    {
        return $this->hasMany(Comment::class, 'parent_id');
    }
}

Here, in the replies function, we pass parent_id as the foreign key, because we need to fetch replies based on the parent comment’s id.

Okay, now we need to move the code that displays all the comments and their replies into a partial blade file.

The reason is that we need to nest the comment replies, and how much nesting is required depends on user interaction, so we cannot predict the nesting levels.

To make it as flexible as possible, we create a partial and then include that partial recursively to display the nested comment replies.

First, create a partials folder inside resources >> views folder and inside partials folder, create one file called _comment_replies.blade.php.

Write the following code inside the _comment_replies.blade.php file.

<!-- _comment_replies.blade.php -->

 @foreach($comments as $comment)
    <div class="display-comment">
        <strong>{{ $comment->user->name }}</strong>
        <p>{{ $comment->body }}</p>
        <a href="#" id="reply">Reply</a>
        <form method="post" action="{{ route('reply.add') }}">
            @csrf
            <div class="form-group">
                <input type="text" name="comment_body" class="form-control" />
                <input type="hidden" name="post_id" value="{{ $post_id }}" />
                <input type="hidden" name="comment_id" value="{{ $comment->id }}" />
            </div>
            <div class="form-group">
                <input type="submit" class="btn btn-warning" value="Reply" />
            </div>
        </form>
        @include('partials._comment_replies', ['comments' => $comment->replies])
    </div>
@endforeach

Here, I have displayed all the replies along with a text box, so further nesting can continue.

Now, this partial expects two parameters:

  1. comments
  2. post_id.

So, when we include this partial inside the show.blade.php file, we need to pass both of these parameters so that we can access them here.

Also, we need to define the route to save the reply.

Add the following line of code inside routes >> web.php file.

// web.php

Route::post('/reply/store', 'CommentController@replyStore')->name('reply.add');

So, our final web.php file looks like below.

<?php

// web.php

Route::get('/', function () {
    return view('welcome');
});

Auth::routes();

Route::get('/home', 'HomeController@index')->name('home');

Route::get('/post/create', 'PostController@create')->name('post.create');
Route::post('/post/store', 'PostController@store')->name('post.store');

Route::get('/posts', 'PostController@index')->name('posts');
Route::get('/post/show/{id}', 'PostController@show')->name('post.show');

Route::post('/comment/store', 'CommentController@store')->name('comment.add');
Route::post('/reply/store', 'CommentController@replyStore')->name('reply.add');



Also, define the replyStore() function inside CommentController.php file.

I am writing here the full code of the CommentController.php file.

<?php

// CommentController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Comment;
use App\Post;

class CommentController extends Controller
{
    public function store(Request $request)
    {
        $comment = new Comment;
        $comment->body = $request->get('comment_body');
        $comment->user()->associate($request->user());
        $post = Post::find($request->get('post_id'));
        $post->comments()->save($comment);

        return back();
    }

    public function replyStore(Request $request)
    {
        $reply = new Comment();
        $reply->body = $request->get('comment_body');
        $reply->user()->associate($request->user());
        $reply->parent_id = $request->get('comment_id');
        $post = Post::find($request->get('post_id'));

        $post->comments()->save($reply);

        return back();

    }
}

So, the store and replyStore functions are almost the same; we store parent comments and their replies in the same table. The difference is that when we save a parent comment, its parent_id is null, and when we store a reply, its parent_id is the id of the comment being replied to.
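To illustrate, here is a hypothetical snapshot of the comments table after one parent comment and one reply (ids and bodies made up):

id | user_id | parent_id | body       | commentable_id | commentable_type
---+---------+-----------+------------+----------------+-----------------
 1 |       1 |      NULL | Nice post! |              1 | App\Post
 2 |       2 |         1 | Thanks!    |              1 | App\Post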

Finally, our show.blade.php file looks like this.

<!-- show.blade.php -->

@extends('layouts.app')
<style>
    .display-comment .display-comment {
        margin-left: 40px
    }
</style>
@section('content')

<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-body">
                    <p><b>{{ $post->title }}</b></p>
                    <p>
                        {{ $post->body }}
                    </p>
                    <hr />
                    <h4>Display Comments</h4>
                    @include('partials._comment_replies', ['comments' => $post->comments, 'post_id' => $post->id])
                    <hr />
                    <h4>Add comment</h4>
                    <form method="post" action="">
                        @csrf
                        <div class="form-group">
                            <input type="text" name="comment_body" class="form-control" />
                            <input type="hidden" name="post_id" value="" />
                        </div>
                        <div class="form-group">
                            <input type="submit" class="btn btn-warning" value="Add Comment" />
                        </div>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

Here, I have defined the CSS to display proper nesting.

Also, I have included the partial and passed both of the parameters:

  1. Post comments.
  2. Post id

We can add a parent comment from here, but replies are added from the partial.

I have added a parent comment and its replies, and our database table looks like this.

Laravel Nested Set Database

 

Also, our final output looks like below.

Laravel 5.6 Polymorphic Nested Relationship Example

 

Finally, Create Comment Nesting In Laravel Tutorial With Example is over.

I have put the Github Code of Create Comment Nesting In Laravel so that you can check that out as well.

Github Code

Fork Me On Github

Steps To Use Code

  1. Clone the repository.
  2. Install the dependencies.
  3. Configure the database.
  4. Migrate the database using this command: php artisan migrate
  5. Go to the register page and add the one user.
  6. Create the post and comment and reply on the comment.

via Planet MySQL
How To Create Comment Nesting In Laravel From Scratch

Comparing RDS vs EC2 for Managing MySQL or MariaDB on AWS

RDS is a Database as a Service (DBaaS) that automatically configures and maintains your databases in the AWS cloud. The user has limited power over specific configurations in comparison to running MySQL directly on Elastic Compute Cloud (EC2). But RDS is a convenient service, as long as you can live with the instances and configurations that it offers.

Amazon RDS currently supports various MySQL and MariaDB versions, as well as the MySQL-compatible Amazon Aurora DB engine. It does support replication, but as you may expect from a predefined web console, there are some limitations.

Amazon RDS Services

There are some tradeoffs when using RDS. These may not only affect the way you manage and provision your database instances, but also key things like performance, security, and high availability.

In this blog, we will take a look at the differences between using RDS and running MySQL on EC2, with a focus on replication. As we will see, deciding between hosting MySQL on an EC2 instance and using Amazon RDS is not an easy task.

RDS Platform Tradeoffs

The biggest database that AWS can host depends on your source environment, the allocation of data in your source database, and how busy your system is.

Amazon RDS Environment options

Amazon RDS instance class

AWS is split into regions. Every AWS account has limits, per region, on the number of AWS resources that can be created. Once a limit for a resource has been reached, additional calls to create that resource will fail.

AWS Regions

For Amazon RDS MySQL DB instances, the maximum provisioned storage limit constrains the size of a table to a maximum size of 6 TB when using InnoDB file-per-table tablespaces.

The InnoDB file-per-table feature is something that you should consider even if you are not looking to migrate a big database into the cloud. You may notice that some existing DB instances have a lower limit; for example, MySQL DB instances created prior to April 2014 have a file and table size limit of 2 TB. This 2 TB file size limit also applies to DB instances and Read Replicas created from DB snapshots taken before April 2014.
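To verify whether new tables get their own tablespace, you can check the variable below; note that on RDS it is controlled through the DB parameter group rather than my.cnf:

SHOW VARIABLES LIKE 'innodb_file_per_table';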

One of the key differences which affects the way you set up and maintain database replication is the lack of SUPER user. To address this limitation, Amazon introduced stored procedures that take care of various DBA tasks. Below are the key procedures to manage MySQL RDS replication.

Skip replication error:

CALL mysql.rds_skip_repl_error;

Stop replication:

CALL mysql.rds_stop_replication;

Start replication:

CALL mysql.rds_start_replication;

Configure an RDS instance as a Read Replica of a MySQL instance running outside of AWS:

CALL mysql.rds_set_external_master;

Reconfigure a MySQL instance to no longer be a Read Replica of a MySQL instance running outside of AWS:

CALL mysql.rds_reset_external_master;

Import a certificate, needed to enable SSL communication and encrypted replication:

CALL mysql.rds_import_binlog_ssl_material;

Remove a certificate:

CALL mysql.rds_remove_binlog_ssl_material;

Change the replication master log position to the start of the next binary log on the master:

CALL mysql.rds_next_master_log;

While the stored procedures take care of a number of tasks, there is a bit of a learning curve. The lack of the SUPER privilege can also create problems when using external replication monitoring.

Amazon RDS does not currently support the following:

  • Global Transaction IDs
  • Transportable Table Space
  • Authentication Plugin
  • Password Strength Plugin
  • Replication Filters
  • Semi-synchronous Replication

Last but not least: access to the shell. Amazon RDS does not allow direct host access to a DB instance via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection (RDP). You can still connect to the DB from an application host using standard tools like the mysql client.
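For instance, something like the following, with a hypothetical endpoint and user (the endpoint is shown on the instance’s details page in the RDS console):

mysql -h mydbinstance.abc123xyz.us-east-1.rds.amazonaws.com -P 3306 -u admin -p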

There are other limitations, as described in the RDS documentation.

High availability with MySQL on EC2

There are options to operate MySQL directly on EC2, and thereby retain control of one’s high availability options. When going down this route, it is important to understand how to leverage the different AWS features that are at your disposal. Make sure you check out our ‘DIY Cloud Database’ white paper.

To automate deployment and management/maintenance tasks (while retaining control), it is possible to use ClusterControl. Just like with RDS, you have the convenience of deploying a database setup in a few minutes via a GUI. Adding nodes, scheduling backups, performing failovers, and so on, can also be conveniently done via the GUI.

Deployment

ClusterControl can automate deployment of different high availability database setups – from master-slave replication to multi-master clusters. All the main MySQL flavours are supported – Oracle MySQL, MariaDB and Percona Server. Some initial setup of the VPC/security group is required, and these steps are well described in the DIY Cloud Database whitepaper. Note that similar concepts apply, whether it is AWS, Google Cloud, or Azure.

ClusterControl Deploy in EC2

Galera Cluster is a good alternative to consider when deploying a highly available MySQL service. It has established itself as a credible replacement for traditional MySQL master-slave architectures, although it is not a drop-in replacement. Most applications can still be adapted to run on it. It is possible to define different segments for databases that span across multiple AWS regions.

ClusterControl expand cluster in EC2

It is possible to set up ‘hybrid replication’ by combining synchronous replication within a Galera Cluster with asynchronous replication between the cluster and one or more slaves. Options like delaying the slave give an additional level of protection to the data.

ClusterControl Add replication in EC2

Proxy layer

To achieve high availability, deploying a highly available setup is not enough. The applications have to somehow know which nodes are working and which ones are not. Changes in topology, e.g. moving a master to another host, also need to be propagated somehow so as to avoid errors in the application layer. ClusterControl supports deployments of proxies like HAProxy, MaxScale, and ProxySQL. For HAProxy and ProxySQL, there are additional options to deploy redundant instances with Keepalived and VirtualIP.

ClusterControl manage load balancers on EC2 nodes

Cross-region replica

Amazon RDS provides read replica services. Cross-region replicas give you the ability to scale reads, as AWS has its services in a number of datacenters around the world. Up to five read replicas can be created per source instance and used for reads, including in other regions. These nodes are independent and can be used in your upgrade path, or can be promoted to standalone databases.

In addition to that, Amazon offers Multi-AZ deployments based on DRBD, synchronous disk replication. How is it different from Read Replicas? The main difference is that only the database engine on the primary instance is active, which leads to other architectural variations.

As opposed to read replicas, database engine version upgrades happen on the primary. Another difference is that AWS RDS will failover automatically with DRBD, while read replicas (using asynchronous replication) will require manual operations from you.

Multi-AZ failover on RDS uses a DNS change to point to the standby instance, according to Amazon this should happen within 60-120 seconds during the failover. Because the standby uses the same storage data as the primary, there will probably be transaction/log recovery. Bigger databases may spend a significant amount of time on InnoDB recovery, so please consider that in your DR plan and RTO calculation.

Of course, this comes at additional cost. Let’s take a look at a basic example. A db.t2.medium host with 2 vCPU and 4 GB RAM costs 185.98 USD per month, and the price roughly doubles when you enable a Multi-AZ (MZ) replica. The price will vary by region, but it will double with MZ.

Cost comparison

In order to achieve the same with EC2, you can deploy your virtual machines in different regions. Each AWS Region is completely independent. The AWS Region setting can be changed in the console, by setting the EC2_REGION environment variable, or it can be overridden by using the --region parameter with the AWS Command Line Interface. When your set of servers is ready, you can use ClusterControl to deploy and monitor your replication. You can also set up replication manually through the console using standard commands.

Cross technology replication

It is possible to set up replication between an Amazon RDS MySQL or MariaDB DB instance and a MySQL or MariaDB instance that is external to Amazon RDS. This is done using the standard MySQL replication method, through binary logs. To enable binary logs, you would normally modify the my.cnf configuration; without access to the shell, that is impossible on RDS, so it is done in a less obvious way. You have two options. One is to enable backups: set automated backups on your Amazon RDS DB instance with a retention period greater than 0. The other is to enable replication to a prebuilt slave server. Both will enable binary logs, which you can later use for your replication.

Enable binary logs via RDS backup

Maintain the binlogs in your master instance until you have verified that they have been applied on the replica. This maintenance ensures that you can restore your master instance in the event of a failure.

Another roadblock can be permissions. The permissions required to start replication on an Amazon RDS DB instance are restricted and not available to your Amazon RDS master user. Because of this, you must use the Amazon RDS mysql.rds_set_external_master and mysql.rds_start_replication commands to set up replication between your live database and your Amazon RDS database.
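As a sketch, per the RDS documentation the procedure takes the master host, port, replication user and password, the binlog file and position, and an SSL flag (host, credentials and coordinates here are hypothetical):

CALL mysql.rds_set_external_master ('master.mydomain.com', 3306, 'repl', 's3cr3tp4SSw0rd', 'mysql-bin.000031', 107, 0);
CALL mysql.rds_start_replication;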

Monitor failover events for the Amazon RDS instance that is your replica. If a failover occurs, then the DB instance that is your replica might be recreated on a new host with a different network address. For information on how to monitor failover events, see Using Amazon RDS Event Notification.

In the example below, we will see how to enable replication from RDS to an external DB located on an EC2 instance.
You should have binary logs enabled; we use an RDS slave here.

Specify the number of hours to retain binary logs:

mysql -h RDS_MASTER -u<username> -p<password>
call mysql.rds_set_configuration('binlog retention hours', 7);

On the RDS MASTER, create a replication user with the following commands:

CREATE USER 'repl'@'ec2DBslave' IDENTIFIED BY 's3cr3tp4SSw0rd';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'ec2DBslave';

On RDS SLAVE, run the commands:

mysql -u<username> -p<password> -h RDS_SLAVE
call mysql.rds_stop_replication;
SHOW SLAVE STATUS\G  -- note Relay_Master_Log_File and Exec_Master_Log_Pos for the CHANGE MASTER TO below

On RDS SLAVE, run mysqldump with the following format:

mysqldump -u<username> -p<password> -h RDS_SLAVE --routines --triggers --single-transaction --databases DB1 DB2 DB3 > mysqldump.sql

Import the DB dump to external database:

mysql -u<username> -p<password> -h ec2DBslave
tee import_database.log;
source mysqldump.sql;
CHANGE MASTER TO 
 MASTER_HOST='RDS_MASTER', 
 MASTER_USER='repl',
 MASTER_PASSWORD='s3cr3tp4SSw0rd',
 MASTER_LOG_FILE='<Relay_Master_Log_File>',
 MASTER_LOG_POS=<Exec_Master_Log_Pos>;

On the external slave, create a replication filter to ignore the tables created by AWS that exist only on RDS:

CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('mysql.rds\_%');

Start replication:

START SLAVE;

Verify the replication status:

SHOW SLAVE STATUS;

That’s it for now. Managing MySQL on AWS is a big topic. Do let us know your thoughts in the comments section below.

via Planet MySQL
Comparing RDS vs EC2 for Managing MySQL or MariaDB on AWS

What Is Target Disk Mode? How and When to Use It on Your Mac


Every Mac can use a variety of boot modes and startup key combinations. One of these is Target Disk Mode, which essentially turns your Mac into an external hard drive.

By connecting two Macs together in this way, you can quickly transfer files, migrate your data to a new Mac, or access your startup disk when macOS refuses to boot. While regular backups are always essential, Target Disk Mode provides added peace of mind in case disaster strikes.

Let’s take a deeper look at what Target Disk Mode is and the different ways you can use it to your benefit.

What Is Target Disk Mode?

Target Disk Mode is a boot mode which allows you to browse and transfer files to and from a Mac’s internal drive without booting macOS. Volumes mount virtually and instantly, and the use of a cable means that transfers are significantly faster than equivalent wireless methods.

Target Disk Mode Starting Mac

You cannot use the target Mac while it is in Target Disk Mode. Your Mac essentially becomes an enclosure for your internal drive. In order to use your Mac again, you’ll need to disconnect and reboot as normal.

Target Disk Mode was first introduced with the PowerBook 100 in 1991 and has made it into most Mac models since then. Exceptions include the tray-loading iMac, Power Macintosh G3 and G4, models of the iBook G3 without FireWire, the first MacBook Air (2008-2009), and the old unibody MacBook.

What You Need to Use Target Disk Mode

You’ll need two compatible Mac computers in order to use Target Disk Mode, each with a FireWire or Thunderbolt interface. You’ll also need a cable and any necessary adapters (like Thunderbolt to FireWire, or Thunderbolt 2 to Thunderbolt 3).

FireWire and Thunderbolt logos

You cannot use plain old USB type-A connectors (not even USB 3.0), but old Thunderbolt and FireWire connections play nicely with the latest standards. Be aware that Thunderbolt cables aren’t cheap. Apple is currently asking $39 for a 2.6-foot-long Thunderbolt 3 cable.

If you’re using a recent Mac, like the post-2017 MacBook Pro or slim iMac, make sure you pick a genuine Thunderbolt 3 cable and not a new-shape USB cable (or Apple’s charger). We’ve put together a guide to help you understand the differences between USB type-C and Thunderbolt cables.

How to Use Target Disk Mode on Mac

When using Target Disk Mode, each Mac takes on a different role:

  • Target: This is the Mac that contains the disk you want to access. You won’t be able to do anything using this Mac, since it will remain in Target Disk Mode for the duration of the operation.
  • Host: This is the Mac which will be accessing the drive. It will boot into macOS as normal so you can transfer files.

1. Connect Your Two Machines

Take your cable and connect both computers via the relevant Thunderbolt or FireWire ports. Connect any adapters you need for older machines. If you’re performing this operation on a MacBook, make sure it has enough power for the duration of the transfer or connect it to a power source.

Apple Thunderbolt 3 cable

2. Start the Target Mac in Target Disk Mode

You can do this two ways:

  • Shut down your target Mac, hit the power button, then press T and hold it while your Mac boots. You can let go when you see a Thunderbolt or FireWire icon on screen.
  • If your target Mac is already running, head to System Preferences > Startup Disk and click on Target Disk Mode to force a restart into Target Disk Mode. No need to hold any keys down here.

Apple Smart Keyboard Hold T

3. Decrypt and Access Your Drive

Wait for macOS to detect your target Mac’s drive. If your target drive is encrypted with FileVault, you’ll need to enter your password when you start up the target Mac. Wait for the drive to decrypt, then it should show up like any other external drive.

4. Copy, Transfer, and Disconnect

Use Finder to browse files, copy to and from the drive, and then safely eject your drive. You can do this by dragging your target Mac’s drive icon over the Trash can, or by right-clicking the drive and selecting Eject.

Eject Drive macOS

On your target Mac, press the power button to power down the machine. You can now restart this machine as normal, if you want.

When to Use Target Disk Mode on Mac

Now that you know how to use target disk mode, you should familiarize yourself with some of the applications for this boot mode.

Quick Wired File Transfers

If you’re used to transferring files between computers using intermediary media like an external hard drive, why not use Target Disk Mode instead? There’s no need to copy from your Mac to a USB volume, then from the USB volume to your destination—simply move from Mac to Mac.

This is most useful for large files like videos, media libraries, disk images, and so on. A wired transfer via Thunderbolt is much faster than a similar wireless transfer using the notoriously buggy AirDrop.

Transferring Data With macOS Migration Assistant

If you’ve bought a new Mac, you’re going to want to transfer your old data to it. There’s no faster way to achieve this than with Target Disk Mode. In this scenario, your new Mac (which you’re transferring data to) is the host and your old Mac (which you want to pull data from) is the target.

Connect the target and host, boot the target into Target Disk Mode as normal, then on the host launch Migration Assistant under Utilities. Select From another Mac, PC, Time Machine backup, or other disk then select From a Mac, Time Machine backup, or startup disk.

Migration Assistant in macOS

When prompted, select your target Mac’s drive and hit Continue to start the transfer process.

Recovering Files When macOS Won’t Boot

Operating system failures happen to the best of us. Whether it’s the result of a botched macOS upgrade, or a dodgy kernel extension that’s preventing your system from booting, connect your problem Mac in Target Disk Mode and breathe easy.

Once you’ve mounted your Mac’s drive, you can start to copy the important files, media libraries, and work documents you forgot to back up. If you’ve got enough space, you could grab your entire /Users/ folder!

Copy Users folder

Run a Target Mac’s Operating System on the Host

What if you have a MacBook with a broken screen or dodgy keyboard? Using Target Disk Mode, you can use a host Mac to boot a target’s operating system. This will restore access to your damaged Mac so you can recover files, wipe the hard drive, and do anything else you need to do.

Connect the two machines as normal, and launch your broken (target) Mac in Target Disk Mode. Now reboot the host machine and enter Startup Manager by holding Option as your host Mac boots. You’ll see your target Mac’s drive appear in the boot menu. Select it, and your host will boot your target’s drive as normal.

Apple Smart Keyboard

You’ll need to know the FileVault password in order to decrypt the drive if you’re using it. From here, it’s possible to recover files, run applications, and prepare your machine for repair.

Limitations of Target Disk Mode on Mac

Target Disk Mode offers real peace of mind and some everyday benefits—you’ll just have to remember to use it! But if you’ve really trashed your target Mac, this can’t help fix your problems.

This is because Target Disk Mode will only work if your drive is operational. If you have a faulty drive, be prepared for some issues. Damage to Thunderbolt and FireWire ports will also make this tricky, as will logic board issues that might prevent normal operation of these ports.



via MakeUseOf.com
What Is Target Disk Mode? How and When to Use It on Your Mac

Weird Al Plays 77 Cover Songs


Throughout his Ridiculously Self-Indulgent, Ill-Advised Vanity Tour, musician “Weird Al” Yankovic and his band performed a different cover song during each night’s encore, from Manfred Mann’s Do Wah Diddy Diddy to Alice Cooper’s School’s Out.

via The Awesomer
Weird Al Plays 77 Cover Songs

Metacat: Making Big Data Discoverable and Meaningful at Netflix

by Ajoy Majumdar, Zhen Li

Most large companies have numerous data sources with different data formats and large data volumes. These data stores are accessed and analyzed by many people throughout the enterprise. At Netflix, our data warehouse consists of a large number of data sets stored in Amazon S3 (via Hive), Druid, Elasticsearch, Redshift, Snowflake and MySQL. Our platform supports Spark, Presto, Pig, and Hive for consuming, processing and producing data sets. Given the diverse set of data sources, and to make sure our data platform can interoperate across these data sets as one “single” data warehouse, we built Metacat. In this blog, we will discuss our motivations in building Metacat, a metadata service to make data easy to discover, process and manage.

Objectives

The core architecture of the big data platform at Netflix involves three key services. These are the execution service (Genie), the metadata service, and the event service. These ideas are not unique to Netflix, but rather a reflection of the architecture that we felt would be necessary to build a system not only for the present, but for the future scale of our data infrastructure.

Many years back, when we started building the platform, we adopted Pig as our ETL language and Hive as our ad-hoc querying language. Since Pig did not natively have a metadata system, it seemed ideal for us to build one that could interoperate between both.

Thus Metacat was born, a system that acts as a federated metadata access layer for all data stores we support. A centralized service that our various compute engines could use to access the different data sets. In general, Metacat serves three main objectives:

  • Federated views of metadata systems
  • Unified API for metadata about datasets
  • Arbitrary business and user metadata storage of datasets

It is worth noting that other companies that have large and distributed data sets also have similar challenges. Apache Atlas, Twitter’s Data Abstraction Layer and Linkedin’s WhereHows (Data Discovery at Linkedin), to name a few, are built to tackle similar problems, but in the context of the respective architectural choices of the companies.

Metacat

Metacat is a federated service providing a unified REST/Thrift interface to access metadata of various data stores. The respective metadata stores are still the source of truth for schema metadata, so Metacat does not materialize it in its storage. It only directly stores the business and user-defined metadata about the datasets. It also publishes all of the information about the datasets to Elasticsearch for full-text search and discovery.

At a higher level, Metacat features can be categorized as follows:

  • Data abstraction and interoperability
  • Business and user-defined metadata storage
  • Data discovery
  • Data change auditing and notifications
  • Hive metastore optimizations

Data Abstraction and Interoperability

Multiple query engines like Pig, Spark, Presto and Hive are used at Netflix to process and consume data. By introducing a common abstraction layer, datasets can be accessed interchangeably by different engines. For example, a Pig script reading data from Hive will be able to read the table with Hive column types mapped to Pig types. For data movement from one datastore to another, Metacat makes the process easy by helping to create the new table in the destination data store using that store’s data types. Metacat has a defined list of supported canonical data types and has mappings from these types to each respective data store type. For example, our data movement tool uses the above feature for moving data from Hive to Redshift or Snowflake.

The Metacat thrift service supports the Hive thrift interface for easy integration with Spark and Presto. This enables us to funnel all metadata changes through one system which further enables us to publish notifications about these changes to enable data driven ETL. When new data arrives, Metacat can notify dependent jobs to start.

Business and User-defined Metadata

Metacat stores additional business and user-defined metadata about datasets in its storage. We currently use business metadata to store connection information (for RDS data sources, for example), configuration information, metrics (Hive/S3 partitions and tables), and table TTLs (time-to-live), among other use cases. User-defined metadata, as the name suggests, is free-form metadata that can be set by users for their own usage.

Business metadata can also be broadly categorized into logical and physical metadata. Business metadata about a logical construct such as a table is considered as logical metadata. We use metadata for data categorization and for standardizing our ETL processing. Table owners can provide audit information about a table in the business metadata. They can also provide column default values and validation rules to be used for writes into the table.

Metadata about the actual data stored in the table or partition is considered as physical metadata. Our ETL processing stores metrics about the data at job completion, which is later used for validation. The same metrics can be used for analyzing the cost + space of the data. Given two tables can point to the same location (like in Hive), it is important to have the distinction of logical vs physical metadata because two tables can have the same physical metadata but have different logical metadata.

Data Discovery

As consumers of the data, we should be able to easily browse through and discover the various data sets. Metacat publishes schema metadata and business/user-defined metadata to Elasticsearch that helps in full-text search for information in the data warehouse. This also enables auto-suggest and auto-complete of SQL in our Big Data Portal SQL editor. Organizing datasets as catalogs helps the consumer browse through the information. Tags are used to categorize data based on organizations and subject areas. We also use tags to identify tables for data lifecycle management.

Data Change Notification and Auditing

Metacat, being a central gateway to the data stores, captures any metadata changes and data updates. We have also built a push notification system around table and partition changes. Currently, we are using this mechanism to publish events to our own data pipeline (Keystone) for analytics to better understand our data usage and trending. We also publish to Amazon SNS. We are evolving our data platform architecture to be an event-driven architecture. Publishing events to SNS allows other systems in our data platform to “react” to these metadata or data changes accordingly. For example, when a table is dropped, our S3 warehouse janitor services can subscribe to this event and clean up the data on S3 appropriately.

Hive Metastore Optimizations

The Hive metastore, backed by an RDS, does not perform well under high load. We have noticed a lot of issues around writing and reading of partitions using the metastore APIs. Given this, we no longer use these APIs. We have made improvements in our Hive connector that talks directly to the backing RDS for reading and writing partitions. Before, Hive metastore calls to add a few thousand partitions usually timed out, but with our implementation, this is no longer a problem.

Next Steps

We have come a long way on building Metacat, but we are far from done. Here are some additional features that we still need to work on to enhance our data warehouse experience.

  • Schema and metadata versioning to provide the history of a table. For example, it is useful to track the metadata changes for a specific column or be able to view table size trends over time. Being able to ask what the metadata looked like at a point in the past is important for auditing, debugging, and also useful for reprocessing and roll-back use cases.
  • Provide contextual information about tables for data lineage. For example, metadata like table access frequency can be aggregated in Metacat and published to a data lineage service for use in ranking the criticality of tables.
  • Add support for data stores like Elasticsearch and Kafka.
  • Pluggable metadata validation. Since business and user-defined metadata is free form, to maintain integrity of the metadata, we need validations in place. Metacat should have a pluggable architecture to incorporate validation strategies that can be executed before storing the metadata.
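As a purely hypothetical sketch of the versioning item above (this is wish-list, not a shipped Metacat feature), an append-only version history makes "what did the metadata look like at time T?" a simple lookup:

```python
import bisect
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MetadataVersion:
    version: int
    changed_at: str   # ISO-8601 UTC timestamp; sorts lexicographically
    payload: dict     # full metadata snapshot at this version

history: list[MetadataVersion] = []

def record_change(payload: dict) -> None:
    """Append an immutable new version; history is never rewritten."""
    history.append(MetadataVersion(
        version=len(history) + 1,
        changed_at=datetime.now(timezone.utc).isoformat(),
        payload=payload,
    ))

def as_of(timestamp: str):
    """Return the latest version recorded at or before `timestamp`."""
    keys = [v.changed_at for v in history]  # already in ascending order
    idx = bisect.bisect_right(keys, timestamp)
    return history[idx - 1] if idx else None
```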

As we continue to develop features to support our use cases going forward, we’re always open to feedback and contributions from the community. You can reach out to us via Github or message us on our Google Group. We hope to share more of what our teams are working on later this year!

And if you’re interested in working on big data challenges like this, we are always looking for great additions to our team. You can see all of our open data platform roles here.



via Planet MySQL
Metacat: Making Big Data Discoverable and Meaningful at Netflix

IP platform PatSnap picks up $38M from Sequoia and Xiaomi founder’s fund

PatSnap, a Euro-Asian company that offers a patent and R&D platform and services, has pulled in a $38 million Series D funding round led by existing investors Sequoia and Shunwei Capital, the investment firm founded by Xiaomi co-founder and CEO Lei Jun. Southeast Asia’s Qualgro also took part.

All three backed the company in 2016 in an undisclosed Series C round. While PatSnap didn’t give a figure for that previous round, it now says it has raised over $100 million to date. Some quick math via figures on Crunchbase suggests that the Series C was something in the region of $50 million.

PatSnap was founded in 2007 and is based out of the UK and Singapore, with locations in China and the U.S. The company started out as essentially a directory for IP, helping companies, and particularly enterprises, pull in data for R&D and product development purposes.

The company claims 8,000 clients worldwide, with the U.S. its largest market by revenue. In China, its second-largest market and a major focus for the firm, PatSnap said it has more than 4,500 clients. Beyond its core data repository, the company is focused on offering services that help enterprises manage internal product development and other R&D initiatives.

“Patent data let us kick down the door and earn respect, but now we’re looking at completely different products,” Ray Chohan, SVP of corporate strategy at PatSnap told TechCrunch in an interview. “We are working on new products for R&D with a long-term view of becoming the software stack for R&D teams.”

That’s exactly how this new capital will be put to work, said PatSnap founder and CEO Jeffrey Tiong. Related to that, the company plans to open a development office in Toronto, Canada. The company already has 700 staff across a range of offices, including London (commercial), China (product), Singapore (machine learning) and LA (go to market).

A Series D is a fairly advanced stage for a startup in Southeast Asia (and London), and exits are something the tech industry is giving more thought to as the ecosystem grows, following events such as Sea’s U.S. listing last year. Despite that, Chohan, who founded the company’s London-based office, said that he’s not thinking too hard about the future for now.

“Our obsession is our employees, customers and building great products, if we can do that then the byproduct of a liquidity event will happen by itself,” he explained.

Chohan added that PatSnap is “well funded” and on course to become profitable over the next two to three years.


via TechCrunch
IP platform PatSnap picks up $38M from Sequoia and Xiaomi founder’s fund

Getting the most out of Steel Targets: What You Need to Know

Steel targets can be an awesome addition to any training regimen, whether for self-defense, long-distance shooting, any sort of competition, or just plain bragging rights at the range with buddies. Thanks to their reactive “CLANG”, steel targets give shooters instant feedback on whether the fundamentals are being properly applied. But […]



via The Firearm Blog
Getting the most out of Steel Targets: What You Need to Know

Beware ‘founder-friendly’ VCs — 3 steps founders should take to protect their companies

In 2014, it seemed like pretty much anyone with a pulse and pitch deck was capable of raising huge amounts of capital from prestigious venture capital firms at sky-high valuations. Here we are four years later and times have changed. VCs inked a little more than 3,100 deals in the last quarter of 2017, according to Crunchbase — about 500 fewer than the previous quarter.

For aspiring startup founders, it’s a “confusing time in the so-called Unicorn story,” as Erin Griffith put it in a column last May — an asset bubble that never really popped, but which at the very least is deflating. In the confirmation hearing for new SEC Chairman Jay Clayton, lawmakers lamented the dearth of initial public offerings as companies that thrived in private markets — from Snap to Blue Apron — have struggled to deliver meaningful returns to investors.

This all creates a number of dilemmas for founders looking to raise capital and scale businesses in 2018. VCs remain an integral part of the innovation ecosystem. But what happens when the changing dynamics of financial markets collide with VCs’ expectations regarding growth? VCs may not always be aligned with founders and companies in this new environment. A recent study commissioned by Eric Paley at Founder Collective found that by pressuring companies to scale prematurely, venture capitalists are indirectly responsible for more startup deaths than founder infighting, technical debt and slow customer adoption — combined.

The new landscape requires that founders be judicious in how they seek out new sources of capital, how they structure cap tables and ownership, and what concessions they make to their new backers in exchange for that much-needed cash. Here are three ways founders can ensure they’re looking out for what’s best for their companies, and themselves, in the long run.

Take time to backchannel

Venture capitalists are arguably in the business of due diligence. Before they sign on the dotted line, they can be expected to call your competitors, your customers, your former employers, your business school classmates; they will ask everyone and their mother about you.


A first-time founder is also new to the pressures of entrepreneurship, of having employees rely on them for their livelihoods. Whether they are desperate for cash to make payroll or anxious for the validation of a headline-worthy investment, few founders take the time to properly backchannel their investors. Until you have done due diligence of your own, your opinion of your VCs will be based on the size of their fund, the deals they’ve done, or the press they’ve gotten. In short, it will likely be based on what they’ve done right.

On the other hand, you likely don’t know anything about the actual partner who will join your board. Are they knowledgeable about your space? Do they have a meaningful network, or do they just know a few headhunters? Are they value creators? What is their political standing within their firm? Before you sign a term sheet, take the time to contextualize the profile of the person taking a board seat; it gives you foresight into the actions your investment partner is likely to take down the road.

Think beyond your first raise

If you do decide to raise capital, make sure you are in alignment with your board regarding your business plan, the pursuit of profit at the expense of revenue growth, or vice versa, and how it will steer your decision making as the market changes. It goes without saying that differences of opinion regarding your business strategy can lead to big conflict down the road.

As you think about these trade-offs, remember that as an entrepreneur, your obligation is to the existing shareholders: the employees and you. As the pack of potential unicorns has thinned, VCs in particular have turned to unconventional deal structures, like the use of common and preferred shares. For the founder who needs to raise cash, a dual ownership structure seems like a fair compromise to make, but remember that it may be at the expense of your employees’ option pool. The interests of preferred and common shareholders are not perfectly aligned, particularly when it comes time to make difficult decisions in the future.

Is VC money right for you?

VCs frequently share information, board decks and investor presentations with members of the press and the tech community, sometimes in support of their own personal agendas or to get perspective on whether to invest or not. That’s why it’s particularly important to backchannel, and more importantly, that you have allies that you can call on and people who can ensure some measure of goodwill. A good company board cannot be made up of just the investors and you: You need advocates that are balanced and on your side.


These prescriptions can sound paranoid, particularly to the founder whose business is growing nicely. But anything can cause a sea change and put you at odds with the people funding your company — who now own a piece of the company that you’re trying to build. When disagreements arise, it can get tense. They might say that you are a first-time founder, and therefore a novice. They will make your weaknesses known and say you’ll never be able to raise again if you ignore their invaluable advice. It’s important that you don’t fall into the fear trap. If you create a product or service that solves an undeniable problem, the money will come — and you will get funded again.

The term founder-friendly VC was always perhaps a bit of a misnomer. The people building the business and the people planning on cashing in on your efforts are imperfect allies. As a founder and business owner, your primary responsibilities are to your clients, to the company you’re building and, most importantly, to the employees who are helping you do it. As founders we like to think that we have all the answers, especially in bad times. Making sure you have alignment with your investors in challenging and unpredictable situations is critical. It’s important to anticipate how your investors will problem-solve before you give up control.

Venture capital is far from the only way to finance an early-stage business. Founders looking to jump-start their business have a number of alternatives, from debt financing and bootstrapping to crowdfunding, angel investors and ICOs. There are indeed still many advantages to having experienced investors on your side, not simply the cash but also the access to hiring and industry knowledge. But the relationship can only benefit both parties when founders go in eyes wide open.


via TechCrunch
Beware ‘founder-friendly’ VCs — 3 steps founders should take to protect their companies