Lighting a Match With a Rubber Band Is a Handy Trick

If your five o’clock shadow isn’t rugged enough to light a match with, and there’s nowhere else to strike one, a rubber band and a second match are all you need to get a fire started. This one’s going to require some practice to master, and some amateur sniper skills, but MacGyver would be proud.



via Gizmodo
Lighting a Match With a Rubber Band Is a Handy Trick

Best Practices for Configuring Optimal MySQL Memory Usage

In this blog post, we’ll discuss some of the best practices for configuring optimal MySQL memory usage.
Correctly configuring the use of available memory resources is one of the most important things you have to get right with MySQL for optimal performance and stability. As of MySQL 5.7, the default configuration uses a very limited amount of memory – leaving the defaults in place is one of the worst things you can do. But configuring memory use incorrectly can result in even worse performance (or even crashes).
The first rule of configuring MySQL memory usage is that you never want MySQL to cause the operating system to swap. Even minor swapping activity can dramatically reduce MySQL performance. Note the keyword “activity” here. It is fine to have some used space in your swap file, as there are probably parts of the operating system that are unused when MySQL is running, and it’s a good idea to swap them out. What you don’t want is constant swapping going on during normal operation, which is easily seen in the “si” and “so” columns of the vmstat output.
Example: No Significant Swapping
Example: Heavy Swapping Going On
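As a quick illustration (not part of the original post), here is how to watch those columns from the shell; the thresholds in the comments simply restate the guidance in this post:

# Print memory/swap stats every 5 seconds and watch the "si"/"so" columns (KB/s)
vmstat 5
# si/so at or near zero                    -> no significant swapping
# si/so constantly non-zero, or spikes
# above ~1000 (about 1MB/sec)              -> heavy swapping; revisit your memory configuration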
If you’re running Percona Monitoring and Management, you can also look into the Swap Activity graph in System Overview Dashboard.
If you have spikes of more than 1MB/sec, or constant swap activity, you might need to revisit your memory configuration.
MySQL memory allocation is complicated. There are global buffers, per-connection buffers (which depend on the workload), and some uncontrolled memory allocations (e.g., inside stored procedures), all of which make it hard to compute how much memory MySQL will really use for your workload. It is better to check it by looking at the virtual memory size (VSZ) that MySQL uses. You can get it from “top”, or by running ps aux | grep mysqld:

mysql     3939 30.3 53.4 11635184 8748364 ?    Sl   Apr08 9106:41 /usr/sbin/mysqld

The 5th column here shows VSZ usage (about 11GB).
Note that the VSZ is likely to change over time. It is often a good idea to plot it in your monitoring system and set an alert to ping you when it hits a specified threshold. Don’t allow the mysqld process VSZ to exceed 90% of the system memory (less if you’re running more than just MySQL on the system).
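If you don't have a full monitoring system handy, a rough cron-able check along these lines does the job (a sketch; the 90% threshold and the mail alert are placeholders you'd adapt):

# Compare mysqld VSZ against 90% of total RAM (both in KB)
VSZ_KB=$(ps -o vsz= -C mysqld | head -1 | tr -d ' ')
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "$VSZ_KB" -gt $(( TOTAL_KB * 90 / 100 )) ]; then
    echo "mysqld VSZ ${VSZ_KB}KB exceeds 90% of RAM (${TOTAL_KB}KB)" | mail -s "mysqld memory alert" dba@example.com
fi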
It’s a good idea to start on the safe side by setting your global and per-connection buffers conservatively, and then increase them as you go. Many can be changed online, including innodb_buffer_pool_size in MySQL 5.7.
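For example, here is a sketch of resizing the InnoDB buffer pool online on 5.7 (the 12G target is just an illustration; the server rounds the value to a multiple of innodb_buffer_pool_chunk_size times innodb_buffer_pool_instances):

# Resize the buffer pool without restarting (MySQL 5.7+)
mysql -e "SET GLOBAL innodb_buffer_pool_size = 12 * 1024 * 1024 * 1024;"
# Follow the progress of the online resize
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';"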
So how do you decide how much memory to allocate to MySQL versus everything else? In most cases you shouldn’t commit more than 90% of your physical memory to MySQL, as you need to have some reserved for the operating system and things like caching binary log files, temporary sort files, etc.
There are cases when MySQL should use significantly less than 90% of memory:
If there are other important processes running on the same server, either all the time or periodically. If you have heavy batch jobs run from cron that require a lot of memory, you’ll need to account for that.
If you want to use OS caching for some storage engines. With InnoDB, we recommend innodb_flush_method=O_DIRECT  in most cases, which won’t use Operating System File Cache. However, there have been cases when using buffered IO with InnoDB made sense. If you’re still running MyISAM, you will need OS cache for the “data” part of your tables. With TokuDB, using OS cache is also a good idea for some workloads.
If your workload has significant demands on the operating system cache – MyISAM on-disk temporary tables, sort files, and some other temporary files which MySQL creates need to be well-cached for optimal performance.
Once you know how much memory you want the MySQL process to have as a whole, you’ll need to think about what that memory should be used for inside MySQL. The first part of memory usage in MySQL is workload related – if you have many connections active at the same time that run heavy selects using a lot of memory for sorting or temporary tables, you might need a lot of memory (especially if Performance Schema is enabled). In other cases this amount of memory is minimal. You’ll generally need somewhere between 1GB and 10GB for this purpose.
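One way to get a feel for this part of the footprint (a sketch, assuming you're on 5.7 with the sys schema installed and memory instrumentation enabled in Performance Schema):

# Top global memory consumers as seen by Performance Schema
mysql -e "SELECT event_name, current_alloc FROM sys.memory_global_by_current_bytes LIMIT 10;"
# Per-connection breakdown, useful for spotting greedy sessions
mysql -e "SELECT thread_id, user, current_allocated FROM sys.memory_by_thread_by_current_bytes LIMIT 10;"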
Another thing you need to account for is memory fragmentation. Depending on the memory allocation library you’re using (glibc, TCMalloc, jemalloc, etc.), operating system settings such as Transparent Huge Pages (THP), and the workload, you may see memory usage grow over time (until it reaches some steady state). Memory fragmentation can also account for 10% or more of additional memory usage.
Finally, let’s think about various global buffers and caching. In typical cases, you mainly have innodb_buffer_pool_size to worry about. But you might also need to consider key_buffer_size, tokudb_cache_size and query_cache_size, as well as table_cache and table_open_cache. These are also responsible for global memory allocation, even though they are not counted in bytes. Performance Schema may also take a lot of memory, especially if you have a large number of connections or tables in the system.
When you specify the size of the buffers and caches, make sure you understand what exactly you’re specifying. For innodb_buffer_pool_size, remember there is another 5-10% of memory that is allocated for additional data structures – and this number is larger if you’re using compression or have set innodb_page_size smaller than 16K. For tokudb_cache_size, it’s important to remember that the setting specified is a guide, not a “hard” limit: the cache size can actually grow slightly larger than the specified limit.
For systems with large amounts of memory, the database cache is going to be by far the largest memory consumer, and you’re going to allocate most of your memory to it. When you add extra memory to the system, it is typically to increase the database cache size.
Let’s do some math for a specific example. Assume you have a system (physical or virtual) with 16GB of memory. We are only running MySQL on this system, with the InnoDB storage engine and innodb_flush_method=O_DIRECT, so we can allocate 90% (or 14.4GB) of memory to MySQL. For our workload, we assume connection handling and other MySQL connection-based overhead will take up 1GB (leaving 13.4GB). 0.4GB is likely to be consumed by various other global buffers (innodb_log_buffer_size, table caches, other miscellaneous needs, etc.), which now leaves 13GB. Considering the 5-7% overhead that the InnoDB buffer pool has, a sensible setting is innodb_buffer_pool_size=12G – a value we very commonly see working well for systems with 16GB of memory.
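The same arithmetic spelled out as a quick check (numbers match the example above):

# 16GB host, MySQL-only, O_DIRECT
TOTAL_GB=16
MYSQL_GB=$(echo "$TOTAL_GB * 0.9" | bc)        # 14.4GB budget for MySQL as a whole
LEFT_GB=$(echo "$MYSQL_GB - 1 - 0.4" | bc)     # minus ~1GB connection overhead and ~0.4GB other global buffers = 13GB
echo "left for the buffer pool (incl. its 5-7% overhead): ${LEFT_GB}GB"   # 12G * 1.07 = 12.84GB, fits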
Now that we have configured MySQL memory usage, we should also look at the OS configuration. The first question to ask is: if we don’t want MySQL to swap, should we even have a swap file enabled? In most cases, the answer is yes – you want to have the swap file enabled (strive for 4GB minimum, and no less than 25% of the memory installed) for two reasons:
The operating system is quite likely to have some portions that are unused when it is running as a database server. It is better to let it swap those out instead of forcing it to keep them in memory.
If you’ve made a mistake in the MySQL configuration, or you have some rogue process taking much more memory than expected, it is usually much better to lose some performance due to swapping than to have MySQL killed with an out of memory (OOM) error – potentially causing downtime.
As we only want the swap file used in emergencies, such as when there is no memory available or to swap out idle processes, we want to reduce the operating system’s tendency to swap (echo 1 > /proc/sys/vm/swappiness). Without this configuration setting you might find the OS swapping out portions of MySQL just because it feels it needs to increase the amount of available file cache (which is almost always the wrong choice for MySQL).
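The echo above only lasts until reboot; to make the setting persistent, the usual sysctl approach works (not spelled out in the original post):

# Persist the low swappiness setting across reboots
echo "vm.swappiness = 1" >> /etc/sysctl.conf
sysctl -p    # apply it now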
The next thing to look at in the OS configuration is the Out Of Memory (OOM) killer. You may have seen messages like this in your kernel log file:
Apr 24 02:43:18 db01 kernel: Out of memory: Kill process 22211 (mysqld) score 986 or sacrifice child
When MySQL itself is at fault, that’s a pretty rational thing to do. However, it’s also possible that the real problem was one of the batch activities you’re running: scripts, backups, etc. In that case, you probably want those processes, rather than MySQL, to be terminated if the system does not have enough memory.
To make MySQL a less likely candidate to be killed by the OOM killer, you can adjust its OOM score with the following:
echo '-800' > /proc/$(pidof mysqld)/oom_score_adj
This will make the Linux kernel prefer killing other heavy memory consumers first.
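Note that oom_score_adj is reset every time mysqld restarts. If the server is managed by systemd, one way to make the adjustment stick is a unit drop-in (a sketch; the unit name mysqld.service may be mysql.service on your distribution):

# Apply the OOM score adjustment at every service start
mkdir -p /etc/systemd/system/mysqld.service.d
cat > /etc/systemd/system/mysqld.service.d/oom.conf <<'EOF'
[Service]
OOMScoreAdjust=-800
EOF
systemctl daemon-reload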
Finally, on a system with more than one CPU socket, you should care about NUMA when it comes to MySQL memory allocation. In newer MySQL versions, you want to enable innodb_numa_interleave=1. In older versions you can either manually run numactl --interleave=all before you start the MySQL server, or use the numa_interleave configuration option in Percona Server.
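A sketch of both approaches (double-check which release in your series introduced innodb_numa_interleave):

# Newer MySQL: interleave buffer pool allocation across NUMA nodes.
# Add this under [mysqld] in my.cnf and restart:
#   innodb_numa_interleave = 1
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_numa_interleave';"   # verify it is ON
# Older versions: start the server under an interleaved memory policy instead
numactl --interleave=all mysqld_safe &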
 
via Planet MySQL
Best Practices for Configuring Optimal MySQL Memory Usage

My slides about MySQL Performance from #PerconaLive Apr.2016 US

As promised, here are my slides from the Percona Live conference in the US, Apr. 2016:
MySQL 5.7 Performance & Scalability Benchmarks (PDF)
MySQL 5.7 Demystified Tuning (PDF)
Feel free to ask any questions or for any details you need, etc. Also, not really related to MySQL, but as I was asked so many times "how did you manage to project your slides from your Mac, but drive and annotate them via iPad?" – here is a short HOWTO:
you need to have Keynote app installed on both your Mac and iPad
you create your own WiFi Network on your Mac (MenuBar->WiFi->Create Network…)
once done, connect your iPad to this WiFi network
(having your own network gets rid of any potential sync issues, removes any dependency on WiFi availability in the room, and also lets you walk far away from your Mac and still keep control of your slides ;-))
then start projecting your Keynote presentation from your Mac
after that, open the Keynote app on your iPad
"click" on Keynote Remote
select your Mac from the list of available devices
and you get your hands on the currently projected slides ;-))
you can then select a preferred layout: current slide, current + next, current + notes, etc.
AND on any slide you can start an annotation and draw over the slide with pencils of different colors to point out one or another part of your slides
(of course, the drawing you make remains only during the annotation and does not destroy your slides ;-))
have fun! ;-))
What else to say? The conference was really great, and I can only admit that Percona is doing it better and better from year to year.. A huge amount of very interesting talks, great technical content mostly everywhere, a lot of innovation, new ideas, deep discussions, etc. etc.. — you don’t know what you’re missing if you were not there ;-)) Well, time for some rest now, and as a final point – a "Bloody Cheesecake" on my departure from SFO Airport (for those who understand ;-))
Rgds,
-Dimitri
via Planet MySQL
My slides about MySQL Performance from #PerconaLive Apr.2016 US

The best refrigerator

By Liam McCabe

This post was done in partnership with The Sweethome, a buyer’s guide to the best things for your home. Read the full article here.

We think the best refrigerator for you is most likely the Whirlpool WRF535SMBM—a reliable, affordable, French door fridge that fits a space 36 inches wide. Our 65 hours of research show that this is the most common size and style of fridge bought in America today.

How we picked

The two most popular refrigerator styles these days are French door (left) and the classic top freezer (right).

Picking a fridge really boils down to personal taste because most fridges work well. Depending on your budget and the amount of space you have, you’ll have dozens of fridges with different features and styles to choose from. In this guide, we recommend a few refrigerators at the most popular sizes, styles, and price points. Maybe one of them is a good fit for your home.

On the other hand, you might need a narrower or shallower fridge than we recommend, or maybe you just prefer a different style. We know that we can’t account for all the make-or-break factors for every kitchen and every family. If our picks don’t suit you or you just want to double-check our criteria, check out the "How to buy a fridge" section in our full guide.

For the specific models that we recommend, we focused on a handful of the most popular size-style-price combinations, which we gleaned from sales data provided by industry groups and manufacturers, the bestseller lists on retailers’ websites, and anecdotes from salespeople and repair technicians. We only recommend models that are available from multiple national retailers, so you should be able to find any of these in your area most of the time.

Because we don’t have the means to test fridges on our own, we got some hands-on time by checking them out in showrooms in the Boston metro area, including Sears, Lowe’s, Home Depot, and Yale Appliance + Lighting. We also considered certain details from editorial reviews, like notes about noise levels. But we mainly relied on user reviews—thousands of them—for info about reliability and other qualitative aspects of the fridges. We find that user experience, taken comprehensively, often provides the best data.

Best for most: A 36-inch French door fridge

The Whirlpool WRF535SMBM is one of the most affordable French door fridges at the most popular width (36 inches).

The Whirlpool WRF535SMBM’s build is stripped-down but solid; it feels like it can turn in years of steady service without much fuss. (It’s a newish model, so long-term data is unavailable, but it gets superior short-term reliability reviews from owners.) With about 25 cubic feet of full-width, well-distributed capacity, it should hold enough food for a family of six with room left for drinks. Energy Star gives its efficiency a stamp of approval. Noise is not a common complaint among owners, either. It has no fancy features other than an ice maker in the freezer, but that means fewer parts that can break over time. And for what it’s worth, the stainless-look, French door design should keep your kitchen looking fresh and modern for years to come.

A four-door fridge from the future

We like the Samsung RF28HMEDBSR for its versatile center drawer, blue LEDs, and silver trim.

For a big kitchen with a big budget, we’d get the 36-inch-wide Samsung RF28HMEDBSR. That’s because we love the four-door look, which is a newly popular variant on the typical three-door French door design. We also love all its little design flourishes, like blue-tinted LEDs and shelves with silver trim. Sure, the center drawer’s "flex" temperature settings are a little gimmicky, but we’d turn it up to the warmest setting and keep our fancy beers in there. The ice maker is slow and somewhat prone to jamming, but we could live with that.

Do you need to pay this much for a good refrigerator? Hell no—we just like this one. If you’re going to use something every day for the next decade, you should get something you like. You should pick whatever suits your tastes, and you have plenty of great options. Our buying guide can point out the pros and cons of most of the currently popular refrigerator designs and features.

An affordable top freezer

Most top-freezer models are pretty similar, but the GE GTS18GTHWW is among the least likely to have reliability issues.

If you’re on a budget, we recommend the GE GTS18GTHWW. As a 30-inch-wide top freezer that costs less than $600, it is the most basic (in design and price), viable fridge that most people should consider. It has all the same features as similar models, and it’s less likely to have a factory defect or other reliability issues, based on what we’ve learned from user reviews. At 17.5 cubic feet, it holds enough food for a family of four. This is also a solid pick if you’re looking for a second fridge to keep in the basement or garage, or if you need to provide a refrigerator for tenants.

A French door model for 33-inch spaces

The Whirlpool WRF532SMBx is a slimmer 33-inch version of the Whirlpool we recommend above.

For 33-inch spaces, Whirlpool makes the WRF532SMBx (the last character is a "wild card" for different finishes), a model that’s nearly identical to the wider Whirlpool that’s our main pick. The same pros and cons that apply to the WRF535SMBM also apply to this model. One catch: 33-inch fridges aren’t as in demand as 36-inch models, so prices tend to be higher despite the refrigerators being smaller.

A sleek 30-inch (or slimmer) fridge

If your kitchen is small enough that you need a 30-inch fridge, the GE GTS18GTHWW we mentioned above as our affordable top-freezer pick is a good choice.

We looked for a 30-inch model with a more modern look and better features, but nothing at the right price jumped out at us. The French door fridges at this size all cost more than comparable 33- and 36-inch models, which makes the smaller models hard to justify buying. None of the bottom-freezer models felt like they were worth the $400 premium over our budget pick. And sure, you could "go in between" and get a top freezer with a stainless finish, but we’d rather you just save the money.

Of course, none of that should stop you from getting a refrigerator you like. If you’re working with a narrow space and want something more than a boring, cheap top freezer, our buying guide may help you find it.

This guide may have been updated by The Sweethome. To see the current recommendation, please see the latest full guide.

via Engadget
The best refrigerator

Clinton Versus Trump – Persuasion Scores

Today I’ll take you to the third dimension of persuasion to see how Clinton and Trump are matching up lately. I can’t make this post appear balanced because Clinton is making big mistakes on the persuasion dimension while Trump is being his usual skillful self. So the best I can do is remind you that my political preferences do not align with Trump or any other candidate.

We’ll start with Clinton’s new campaign slogan: 

LOVE TRUMPS HATE

Based on the slogan, I can tell you with confidence that the Clinton campaign doesn’t have anyone with a persuasion background helping with the big decisions. Here’s why:

1. Humans put greater cognitive weight on the first part of a sentence than the last part. This is a well-understood phenomenon. And the first part literally pairs LOVE and TRUMP. 

2. The slogan increases exposure to the name Trump. That’s never a good idea.

3. Spoken aloud, the slogan sounds like asking people to agree with Trump’s hate, as in “Love Trump’s hate” (because Trump hates war, terrorism, and bad trade deals, same as you?).

This is the sort of mistake you never see out of the Trump campaign. The slogan is pure amateur hour. It accomplishes the opposite of its intent, and you can’t fail harder than that.

Now let’s look at the “woman card” issue. Trump took the risky (but strategically solid) approach of taking the fight to Clinton’s strength – her appeal among women voters and among men who think it is time for a woman to be president. Trump branded her as a sexist who is hiding behind political correctness. It was a strong persuasion play and it put Clinton on the defensive.

Clinton responded by embracing and magnifying the accusation. She said that if fighting to make the world better for women is playing the “woman card” then you can “Deal me in!” The response was quick, clever, and catnip for her base.

You might remember Trump using a similar persuasion trick. Months ago, when Chris Cuomo asked Trump about the criticisms that he was a whiner, Trump embraced the whiner label, then amplified it by saying he was indeed the strongest voice for change. That’s exactly the right response. Clinton made the same play with “Deal me in!” So far, so good.

Then came the image of an actual “woman card” designed to capitalize on Clinton’s successful counterpunch. When something is working, you do more of it. But…maybe you should not do it…this way.

Let’s start with the fact that the design features a symbol from a restroom door. Just as the Clinton slogan unintentionally linked LOVE and TRUMP, the restroom symbol literally makes your brain associate Clinton with…a toilet.

You can’t make this up. When you saw that symbol, you thought of a restroom. It is automatic.

But the biggest mistake was putting a magnetic strip on the Woman Card. That makes you think of a credit card. And that makes you think of debt. Or perhaps it makes you think of a transit card that Clinton had trouble using at the subway in New York. All bad.

You might ask yourself why the campaign did not go with a playing card model instead of a credit card. After all, “deal me in” is not typically associated with a magnetic strip. 

I’ll tell you why they didn’t use playing cards as their clever response. It’s because you would have to end up labelling Clinton the queen of – let’s say – hearts. And in cards, the queen is ranked below the king. That’s not so good if your opponent is a man…who lives in castles.

When asked about the Woman Card issue, Clinton made an enormous error by saying she knows how to deal with men who go “off the reservation.” For starters, it is a racist reference. But the bigger issue is that it opened the door for Trump to say – as he has – that it is offensive to men and a sign that Clinton believes men need to be controlled, and kept on the “reservation” by…women.

Trump flipped the frame on her, as he does so well. The original frame for Clinton’s “reservation” comment was that Trump was the problem and Clinton has a lot of experience dealing with that personality type. Trump reframed the situation as if Clinton were saying that men in general need to be kept in line…by…women.

Historians will someday see Clinton’s “off the reservation” comment as one of the biggest mistakes in American politics. It might not play that way on the 2D level of politics where it seems little more than a bad choice of words. But it is far more.

At some point, expect Trump to remind the country that we have sons, too.

Prediction: On November 8th you will see a record number of men walking “off the reservation” to vote for Trump.

via Scott Adams Blog
Clinton Versus Trump – Persuasion Scores

Taking the MySQL document store for a spin

This is not a comprehensive review, nor a user guide. It’s a step-by-step account of my initial impressions while trying the new MySQL X-Protocol and the document store capabilities. In fact, I am barely scratching the surface here: more articles will come as time allows.

MySQL 5.7 has been GA for several months, as it was released in October 2015. Among the many features and improvements, I was surprised to see the MySQL team emphasizing the JSON data type. While it is an interesting feature per se, I failed to see the reason why so many articles and conference talks were focused on this single feature. Everything became clear when, with the release of MySQL 5.7.12, the MySQL team announced a new release model.

Overview

In MySQL 5.7.12, we get the usual MySQL server, which shouldn’t have new features. However, in an attempt to combine the stability of the server with a more dynamic release cycle, the server ships with a new plugin, unimaginatively named X-Plugin, which supports an alternative communication protocol, named X-Protocol.

In short, the X-Protocol extends and replaces the traditional client/server protocol by allowing asynchronous communication with the server, using different API calls, which are available, as of today, in Javascript, Python, C#, and Java, with more languages to come.

The reason for this decision is easy to see. Many developers struggle with relational tables and SQL, while they understand structures made of arrays and associative arrays (maps). This is also one of the reasons for the recent popularity of NoSQL databases, where schemas and tables are replaced by collections of documents or similar schema-less structures. With this new release, MySQL wants to offer the best of both worlds, by allowing developers to use the database with the tools they feel most comfortable with.

To use the new plugin, you need two components:

The plugin itself, which ships with the server package, but is not enabled by default;
The MySQL shell, a new command line tool that you have to download and install separately, and that will allow you to use Javascript or Python with the database.

As a QA professional, I am a bit concerned about this mix of GA and alpha features (the MySQL shell is defined as alpha software, and the shell itself says "development preview" in its help). Theoretically, the two worlds should be separated. If you don’t install the plugin, the server should work as usual. But practice and experience tell me that there are dangers waiting for a chance to destroy our data. If you want a single piece of advice to summarize this article, DON’T USE the new MySQL shell with a production server. That said, let’s start a quick tour.

Installation

You need to install the shell, which comes in a package that is different from the rest of the MySQL products. The manual shows how to install it on OSX or Linux. The only mention that this product could be dangerous to use is a note reminding the user to enable the MySQL Preview Packages when installing from a Linux repository. The procedure, on any operating system, will install libraries and executables globally. Unlike the server package, it is not possible to install it in a user-defined directory, the way you install the server with MySQL Sandbox. In this context, the standard Oracle disclaimer may have a meaning that goes beyond a regular CYA.

Next, you need to enable the plugin.
You can do it in three ways:

(1)
$ mysqlsh --classic -u youruser -p --dba enableXProtocol
mysqlx: [Warning] Using a password on the command line interface can be insecure.
Creating a Classic Session to youruser@localhost:3306
Enter password:
No default schema selected.
enableXProtocol: Installing plugin mysqlx...
enableXProtocol: done

(2)
Start the server with --plugin-load=mysqlx=mysqlx.so. This will enable the plugin, although it does not seem to work the way it should.

(3)
Enable the plugin with a SQL command:

mysql> install plugin mysqlx soname 'mysqlx.so';

I prefer method #3 because it is the only one that does not have side effects or cause misunderstandings. The issue that hit me when I tried method #1 for the first time is that calling mysqlsh --classic uses the client/server protocol on port 3306 (or the port that you defined for the database), while subsequent calls will use the X-Protocol on port 33060.

Alternatives: Using Docker

If what I said previously has made you cautious and you have decided not to use the shell on your main computer (as you should), there are alternative ways. If you have a data center at your disposal, just fire up a virtual machine and play with it. However, be aware that the MySQL shell does not install on Ubuntu 15.04 and 16.04. A lightweight method to try out the new shell without endangering your production server is to use a Docker image for MySQL, or a combination of MySQL Sandbox and Docker.

In Docker, the MySQL shell does not ship together with the server. It requires a separate image. A quick guide is available in a recent article. I don’t like the current approach: having two images is a waste of space. It would be acceptable if the images were based on a slim Linux distribution, such as Alpine. Since they run on Oracle Linux instead, you need to download two beefy images to start testing. With a fast internet connection this should not be a problem, but if you live in a place where 3 Mbps is the norm, or if you are traveling, this could become an annoyance. Once you have pulled the images, you can use them at will, even without an internet connection.

The above mentioned quick guide suggests using docker run --link to connect the two containers. I recommend a different approach, as the link option is now considered legacy.

$ docker network create mynet
edcc36be21e54cdb91fdc91f2c320efabf62d36ab9d31b0142e901da7e3c84e9

$ docker network ls
NETWORK ID          NAME                DRIVER
a64b55fb7c92        bridge              bridge
0b8a52002dfd        none                null
cc775ec7edab        host                host
edcc36be21e5        mynet               bridge

$ docker run --name mybox -e MYSQL_ROOT_PASSWORD=secret -d --net mynet mysql/mysql-server:5.7.12 \
    --plugin-load=mysqlx=mysqlx.so
ecbfc322bb17ec0b1511ea7321c2b10f9c7b5091baee4240ab51b7bf77c1e424

$ docker run -it --net mynet mysql/shell -u root -h mybox -p
Creating an X Session to root@mybox:33060
Enter password:
No default schema selected.
Welcome to MySQL Shell 1.0.3 Development Preview

Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help', '\h' or '\?' for help.

Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js>

The first command creates a network (called mynet). The second command creates the server container, which is launched using the network mynet and with the plugin-load option (which seems to work well with the docker image).
When you use a docker network, the container name is recognized by the network as a host name, and can be called by other members of the network. This is much cleaner than using --link. The third command runs the MySQL shell, using the same network. This allows us to use the container name (mybox) without any other options.

Running the MySQL Javascript shell

My favorite setup for this test is a mix of MySQL Sandbox for the server and Docker for the shell. This way I can use the alpha shell without polluting my Linux host, and use a feature-rich MySQL Sandbox to control the server.

Here is what I do:

$ make_sandbox 5.7.12 -- --no_show -c general_log=1 -c general_log_file=general.log

I start a sandbox with MySQL 5.7.12 (tarball expanded and renamed into /opt/mysql/5.7.12), with the general log enabled. We need this to peek under the hood when we use the document store.

Next, we load the sample world_x database from the MySQL documentation page.

$ ~/sandboxes/msb_5_7_12/use -e 'source world_x.sql'

Finally, we enable the plugin.

$ ~/sandboxes/msb_5_7_12/use -e "install plugin mysqlx soname 'mysqlx.so'"

Now we can connect the shell:

$ docker run -it --net host mysql/shell -u msandbox -pmsandbox world_x
mysqlx: [Warning] Using a password on the command line interface can be insecure.
Creating an X Session to msandbox@localhost:33060/world_x
Default schema `world_x` accessible through db.

Welcome to MySQL Shell 1.0.3 Development Preview

Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help', '\h' or '\?' for help.

Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js>

What have we done? We use the network named 'host', which is a standard Docker protocol that lets a container use the host environment. We don’t need to specify a port, since the shell assumes 33060 (enabled by the X-Plugin). The username and password are the usual ones for a sandbox. We are now inside a Javascript shell, where we can communicate with the database server using an alternative syntax. Let’s see what we have:

We have an "X Session" using port 33060 and working on database world_x;
There is a help, same as in the MySQL client;
The database world_x is accessible through the variable db.

Note: all the commands used below are the same for Python and Javascript. There are differences only when using the language extensively.

With the above elements, we can try getting data from the database.

mysql-js> db.collections
{
    "CountryInfo": <Collection:CountryInfo>
}
mysql-js> db.tables
{
    "City": <Table:City>,
    "Country": <Table:Country>,
    "CountryLanguage": <Table:CountryLanguage>
}

What does it mean? Let’s abandon the Javascript shell and look at the traditional client:

mysql [localhost] {msandbox} (world_x) > show tables;
+-------------------+
| Tables_in_world_x |
+-------------------+
| City              |
| Country           |
| CountryInfo       |
| CountryLanguage   |
+-------------------+
4 rows in set (0.00 sec)

Here we see 4 tables, while the Javascript console lists only 3. However, the fourth table has the same name as the "collection."
Let’s have a look:

mysql [localhost] {msandbox} (world_x) > desc CountryInfo;
+-------+-------------+------+-----+---------+------------------+
| Field | Type        | Null | Key | Default | Extra            |
+-------+-------------+------+-----+---------+------------------+
| doc   | json        | YES  |     | NULL    |                  |
| _id   | varchar(32) | YES  |     | NULL    | STORED GENERATED |
+-------+-------------+------+-----+---------+------------------+
2 rows in set (0.00 sec)

mysql [localhost] {msandbox} (world_x) > show create table CountryInfo\G
*************************** 1. row ***************************
       Table: CountryInfo
Create Table: CREATE TABLE `CountryInfo` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

Look what we got! A JSON column with a dynamic index implemented as a virtual column. Now we can appreciate why the JSON data type was such an important thing.

Back to the Javascript shell, let’s get something from the database. (You can get all the commands I am using, and much more, from the manual.)

mysql-js> db.collections.CountryInfo.find("_id='USA'")
[
    {
        "GNP": 8510700,
        "IndepYear": 1776,
        "Name": "United States",
        "_id": "USA",
        "demographics": {
            "LifeExpectancy": 77.0999984741211,
            "Population": 278357000
        },
        "geography": {
            "Continent": "North America",
            "Region": "North America",
            "SurfaceArea": 9363520
        },
        "government": {
            "GovernmentForm": "Federal Republic",
            "HeadOfState": "George W. Bush"
        }
    }
]
1 document in set (0.00 sec)

Apart from the feeling of being back in the good old times when MySQL was still playing with IPO dreams (look at the HeadOfState field in the above data), this record is a straightforward JSON document, where data that should belong to different normalized tables are bundled together in this unified view. So, we are really querying a table that contains JSON data associated with an _id. We know this because the general log lists what happens after our simple query:

SELECT doc FROM `world_x`.`CountryInfo` WHERE (`_id` = 'USA')

Let’s try a more complex query. We want all countries in Oceania with a population of more than 150,000 people, and whose Head of State is Elisabeth II. The query is a bit intimidating, albeit eerily familiar:

mysql-js> db.collections.CountryInfo.find("government.HeadOfState='Elisabeth II' AND geography.Continent = 'Oceania' AND demographics.Population > 150000").fields(["Name", "demographics.Population","geography.Continent"])
[
    {
        "Name": "Australia",
        "demographics.Population": 18886000,
        "geography.Continent": "Oceania"
    },
    {
        "Name": "New Zealand",
        "demographics.Population": 3862000,
        "geography.Continent": "Oceania"
    },
    {
        "Name": "Papua New Guinea",
        "demographics.Population": 4807000,
        "geography.Continent": "Oceania"
    },
    {
        "Name": "Solomon Islands",
        "demographics.Population": 444000,
        "geography.Continent": "Oceania"
    }
]
4 documents in set (0.00 sec)

Here is the corresponding SQL query recorded in the general log:

SELECT JSON_OBJECT(
    'Name', JSON_EXTRACT(doc,'$.Name'),
    'demographics.Population', JSON_EXTRACT(doc,'$.demographics.Population'),
    'geography.Continent', JSON_EXTRACT(doc,'$.geography.Continent')) AS doc
FROM `world_x`.`CountryInfo`
WHERE (((JSON_EXTRACT(doc,'$.government.HeadOfState') = 'Elisabeth II')
    AND (JSON_EXTRACT(doc,'$.geography.Continent') = 'Oceania'))
    AND (JSON_EXTRACT(doc,'$.demographics.Population') > 150000))

I am not sure which one I prefer. The SQL looks strange, with all those JSON functions, while the Javascript command seems more readable (I had never thought I would say what I have just said!)

Enough with reading data. I want to manipulate some.
I’ll start by creating a new collection.

mysql-js> db.createCollection('somethingNew')
<Collection:somethingNew>

And the general log shows what should not be a surprise, as we have seen a similar structure for CountryInfo:

CREATE TABLE `world_x`.`somethingNew` (doc JSON,
    _id VARCHAR(32)
    GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(doc, '$._id')))
    STORED NOT NULL UNIQUE) CHARSET utf8mb4 ENGINE=InnoDB

Now, on to the data manipulation:

mysql-js> mynew=db.getCollection('somethingNew')
<Collection:somethingNew>

The variable mynew can access the new collection. It’s a shortcut to avoid typing db.collections.somethingNew.

mysql-js> db.collections
{
    "CountryInfo": <Collection:CountryInfo>,
    "somethingNew": <Collection:somethingNew>
}
mysql-js> mynew.find()
Empty set (0.00 sec)

As expected, there is nothing inside the new collection. Now we enter a very minimal record.

mysql-js> mynew.add({Name:'Joe'})
Query OK, 1 item affected (0.01 sec)

mysql-js> mynew.find()
[
    {
        "Name": "Joe",
        "_id": "e09ef177c50fe6110100b8aeed734276"
    }
]
1 document in set (0.00 sec)

The collection contains more than what we have inserted. There is an apparently auto-generated _id field. Looking at the general log, we see that the data includes the new field.

INSERT INTO `world_x`.`somethingNew` (doc) VALUES ('{\"Name\":\"Joe\",\"_id\":\"e09ef177c50fe6110100b8aeed734276\"}')

As you can see, an _id field was added automatically. We could override that behavior by providing our own value:

mysql-js> mynew.add({_id: "a dummy string", Name:"Frank", country: "UK"})

The data inserted now includes the _id field with our manual value. The general log says:

INSERT INTO `world_x`.`somethingNew` (doc) VALUES ('{\"Name\":\"Frank\",\"_id\":\"a dummy string\",\"country\":\"UK\"}')

The value of _id, however, must be unique, or the engine will generate an error:

mysql-js> mynew.add({_id: "a dummy string", Name:"Sam", country: "USA"})
MySQL Error (5116): Document contains a field value that is not unique but required to be

If all this gives you a sense of deja-vu, you’re right. This feels and smells a lot like MongoDB, and I am sure it isn’t a coincidence.

Synchronizing operations

As our last attempt for the day, we will see what happens when we manipulate data in SQL and then retrieve it in Javascript or Python. We leave the JS console open, and we do something in SQL:

mysql [localhost] {msandbox} (world_x) > drop table somethingNew;
Query OK, 0 rows affected (0.01 sec)

How does it look on the other side?

mysql-js> db.collections
{
    "CountryInfo": <Collection:CountryInfo>,
    "somethingNew": <Collection:somethingNew>
}
mysql-js> db.getCollections()
{
    "CountryInfo": <Collection:CountryInfo>,
    "somethingNew": <Collection:somethingNew>
}

Oops! mysqlsh didn’t get the memo! It still considers somethingNew to be available.

mysql-js> db.collections.somethingNew.find()
MySQL Error (1146): Table 'world_x.somethingNew' doesn't exist

We need to refresh the connection. Unlike the SQL client, you need to specify the connection parameters.

mysql-js> \connect msandbox:msandbox@localhost:33060/world_x
Closing old connection...
Creating an X Session to msandbox@localhost:33060/world_x
Default schema `world_x` accessible through db.

mysql-js> db.collections
{
    "CountryInfo": <Collection:CountryInfo>
}

We can see the same thing happening when we create a new table in SQL. The session in mysqlsh keeps showing the cached contents, and we need to refresh the session to see the changes. Looking at the general log, there are no changes when we issue commands asking for metadata, such as db.collections or db.tables.
Instead, when we refresh the session, we see this:

SELECT table_name, COUNT(table_name) c FROM information_schema.columns
    WHERE ((column_name = 'doc' and data_type = 'json')
    OR (column_name = '_id' and generation_expression = 'json_unquote(json_extract(`doc`,''$._id''))'))
    AND table_schema = 'world_x' GROUP BY table_name HAVING c = 2

SHOW FULL TABLES FROM `world_x`

The first query lists all tables that contain a JSON document and a generated _id (these are the collections). The second one lists all tables. Then the shell removes from the table list all the ones that were in the collections list.

Given the way it is done, we can cheat the system easily by creating something that looks like a collection, but has extra fields:

CREATE TABLE strangedoc (doc JSON,
    _id VARCHAR(32)
    GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(doc, '$._id')))
    STORED NOT NULL UNIQUE,
    secret_stash varchar(200),
    more_secret_info mediumtext) CHARSET utf8mb4 ENGINE=InnoDB;

mysql [localhost] {msandbox} (world_x) > insert into strangedoc (doc,secret_stash,more_secret_info) values
    ('{"_id": "abc", "name": "Susan"}',
    'and now for something completely different',
    'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.');
Query OK, 1 row affected (0.00 sec)

mysql [localhost] {msandbox} (world_x) > select * from strangedoc\G
*************************** 1. row ***************************
             doc: {"_id": "abc", "name": "Susan"}
             _id: abc
    secret_stash: and now for something completely different
more_secret_info: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
1 row in set (0.00 sec)

And the Javascript console will be unaware of the extra material:

mysql-js> db.collections
{
    "CountryInfo": <Collection:CountryInfo>,
    "strangedoc": <Collection:strangedoc>
}
mysql-js> db.strangedoc.find()
[
    {
        "_id": "abc",
        "name": "Susan"
    }
]
1 document in set (0.00 sec)

We can add contents to the collection in Javascript, and the database server won’t protest (provided that the extra fields are nullable or have a default value). Is it a bug or a feature?

Parting thoughts

As I said at the beginning, this is a very simple exploration. More work is required to test the full potential of the new model. My impressions are mildly positive. On one hand, it’s an exciting environment, which promises to expand to better usefulness with more programming languages and possibly better coordination between shell and server software. On the other hand, there are many bugs, and the software is still very green. It will require more iterations from the community and the development team before it can be trusted with important data.
via Planet MySQL
Taking the MySQL document store for a spin

Jase Robertson of Duck Dynasty on Gun Violence and Gun Control (Video)

“It’s a problem. There’s raving lunatics out there with guns.” Those words are spoken by Duck Dynasty’s Jase Robertson in this short video about gun control and whether it will prevent crime. Here you are following the rules, you’re in class, trying to do what’s right, and some idiot comes in there with a gun[…..]

The post Jase Robertson of Duck Dynasty on Gun Violence and Gun Control (Video) appeared first on AllOutdoor.com.

via AllOutdoor.com
Jase Robertson of Duck Dynasty on Gun Violence and Gun Control (Video)

How To Offer More Personal Customer Support Through Effective Automation


  

Robots are great for cleaning the floor and are perfect for exploring the moon. They’re just not that great at customer support. The last thing your customers want is another “We received your message” email or “Thank you for holding” recording. Robots only succeed in making customers feel like another number, a dubious accomplishment for your team. They’re the opposite of the personal touch that effective support is supposed to be all about.

It’s not that robots are useless. They’re great at repetitive tasks, perfect for finding data and remembering anything you’ve ever written down. As sidekicks, robots can help offer more personalized support, doing the tedious parts of support so that you can focus on actually solving problems. You just have to give them the right job. Here’s how to find the perfect job for your robots, so that you can automate support and offer more personalized, hands-on support at the same time.

The post How To Offer More Personal Customer Support Through Effective Automation appeared first on Smashing Magazine.

via Smashing Magazine
How To Offer More Personal Customer Support Through Effective Automation