Planning a meeting with me used to take hours. First you had to bathe yourself in the ritual Spring of Understanding and then, once completely clean, you had to enter the room of Writing Down The Appointment. Once that was complete, I required all those requesting a meeting to complete the three trials, including the Making of the Hoagie and the Understanding (And Explaining) Of Django Programming. Ultimately, few passed my tests.
Now, however, I just use Vyte. Vyte lets folks visit your private page – like mine – and select a date to meet. You can approve the request, set a location, or even decline it. It’s much like competing services such as Calendly, but I particularly like Vyte’s ease of use and design.
Founded by French techies Martin Saint-Macary and Philippe Hong, the company is self-funded and just starting out. It has 200 paying companies and 6,000 monthly active users.
“I met Philippe at a startup competition a few years ago,” said Saint-Macary. “After enjoying working together on some side projects, we co-founded Vyte together. We started tackling the group scheduling issue, and later realised that oddly enough, scheduling 1-on-1s was a much bigger pain at work, so we refocused on that.”
The system syncs with your Google Calendar and the mobile app acts as its own calendar app, allowing you to replace your default one.
Sadly, while Vyte does not allow me to force those who wish an audience with me to complete the arduous task of Putting The Fitted Sheet Down The Right Way The First Time, it does make it easier for me to pass the buck and say, “Hey, click on this and pick a time and I’ll tell you ‘No.’” Thus, as they say, the great world spins.
Twitter recently introduced an updated privacy policy announcing changes to how they collect user data and deliver advertising into your timeline. So what does the update mean and what should you do about it?
If you haven’t logged in to Twitter since the changes were announced, you’ll see a message announcing the new privacy policy.
Read on to understand what those changes are, and then you can click Review settings to make the necessary changes.
If you’ve already dismissed the message, you can access these settings again by going to Settings > Privacy and Safety. Scroll down to Personalization and Data and click Edit. This will take you to your Personalization and Data page.
New Personalization and Data Sharing Settings
Going to your Personalization and Data settings allows you to see (and adjust) how Twitter collects and shares your data. You can selectively enable or disable these personalization settings:
Personalized ads — If enabled, you will see interest-based ads on and off Twitter.
Personalization based on apps — Personalized ads and content based on the list of apps you have installed on your mobile devices. (Twitter can’t view data inside the apps.)
Personalization across devices — If enabled, Twitter can serve up ads, content, and users to follow in the mobile app based on sites you’ve visited on your laptop and vice versa. Twitter gives an example:
“If you visit websites with sports content on your laptop, you can use this setting to help control whether we show you sports-related ads on Twitter for Android or iOS.”
Personalization based on the places you’ve been — Personalized ads and content based on your current or previous locations.
You can selectively enable or disable the following data settings:
Track where you see Twitter content across the web — Personalized ads, content, and users to follow based on websites you visit that include Twitter content, like embedded tweets or tweet buttons. Your web browsing history will be stored by Twitter, but will not be associated with your username, name, email, or phone number. Twitter will store this information for 30 days (compared to 10 days previously), after which it will be deleted, aggregated, or obfuscated. Twitter users in the European Union and EFTA states (Iceland, Liechtenstein, Norway and Switzerland) are automatically exempted from this. Twitter gives the following example:
“If you regularly visit birdwatching websites, we might suggest accounts that frequently Tweet about that topic, or show you ads for binoculars or birdfeeders.”
Share data through select partnerships — Twitter is also asking permission to share data with partners. Twitter describes the data vaguely as “non-personal, aggregated, and device-level data.” Twitter doesn’t specify the partners, but does say that personal data you consent to share will not include your name, email, or phone number.
What You Should Do
While Twitter has earned criticism from privacy activists with these changes, and rightfully so since you’re automatically opted in to most of these settings, you can easily disable all personalization and access to your data.
If Twitter already has a list of the apps on your mobile devices, the list should be removed when you disable the feature.
In addition to being able to disable all of these settings, Twitter has also made it a little easier for you to see your Twitter data that is of interest to advertisers, and which advertisers have included you in their tailored audience lists on Twitter.
There are several things you can do with this data:
You can request a list of the advertisers and that list will be emailed to you when it’s ready.
You can adjust the interests Twitter associates with your account based on your profile and activity.
What do you think of Twitter’s new privacy and personalization settings? Are you comfortable sharing that data with advertisers? Do you think Twitter has done the right thing by giving users access to the settings or do you think they’re collecting too much information from their users? Let us know in the comments.
Outreach, a software company selling services that give sales forces a needed prompt to use their time more efficiently and optimize sales, along with an organizational tool to manage their pitching process, has just raised $30 million with that very pitch.
The company touts its ability to triple the volume of meetings and increase the sales pipeline for front-line sales representatives.
Outreach works its magic by collecting data from a variety of sources including email and customer relationship management tools. It also automatically repopulates information back into existing customer relationship management systems.
Customers should think of Outreach as a layer of automation on top of the existing customer relationship management stack, according to Manny Medina, the company’s chief executive.
Indeed, writing in TechCrunch last year, contributor Karan Mahendru, a partner at Trinity Ventures, agreed:
All of that said, the missing piece in this movement is sales automation. This will be a huge area of activity and acquisitions in the next few years as everyone tries to be the home screen for sales.
Companies like our new portfolio company, Outreach, and companies like SalesLoft and ToutApp are building the systems of action necessary to codify and apply essential productivity learnings and workflow solutions.
The sales tech stack is being built as we speak, and it’s happening in lockstep with the move from the one-to-many work of demand generation, to the one-to-one world of account-based sales and marketing. These trends are paving the way for the next generation platforms of engagement. For the first time, these tools and technologies are giving visibility into the “how” of sales, not just the “how much” which is both exciting and necessary.
In the end, the goal is to build software that actually helps salespeople close deals, not just add another reporting layer for management. We want software that allows sales professionals to be the best version of themselves.
The new financing was led by DFJ Growth, while previous investors Mayfield, MHS Capital, Microsoft Ventures and Trinity Ventures participated in the funding. Four Rivers Group, another new investor, also came aboard.
To date Outreach has raised $60 million in venture funding.
As Judah Friedlander himself teased on the Disrupt NY stage, today we present to the world: Judah Vs. The Machines.
Judah Vs. The Machines is an eight-episode web series that follows comedian Judah Friedlander as he takes on the world’s most sophisticated artificial intelligence systems to see who truly reigns supreme.
The series was produced by The Onion, in collaboration with TechCrunch, and is an excellent combination of informative content and hilarity.
You can check out all eight episodes below and enjoy a little binge viewing.
Episode 1: Judah Vs. Dog Breed Recognition Robot
Judah visits Facebook, an up-and-coming website. He learns how AI works on the platform and how difficult it is to translate emojis (emojii?). And he challenges its dog breed recognition software to see if man is still dog’s best friend.
Episode 2: Judah Vs. Art Robot
Judah meets Alex Reben, an artist and roboticist who has created a painting robot. He meets some of Alex’s weirder creations, such as a robot that intentionally harms humans. And he challenges the art-bot to an art-off in the final challenge.
Episode 3: Judah Vs. Soccer Playing Robots
Judah challenges RoboCanes, soccer playing robots from the University of Miami. He learns how well robots can work together and how bad they are at standing on two feet. And he challenges them to a match with the future of humanity on the line.
Episode 4: Judah Vs. Shopping App Thing
Judah meets Operator, an app that uses AI technology somehow. He pieces together some facts about the company, such as why their phone booths have no phones. And he acts as personal shopper in the challenge with Operator.
Episode 5: Judah Vs. Self-Driving Toy Cars
Judah challenges Anki, a self-driving toy car. He learns what it takes to be a robot and who to blame if things go wrong. And he faces the jeers of Anki’s creators in the final showdown.
Episode 6: Judah Vs. Hotel Delivery Robot
Judah meets Relay, a robot that delivers stuff to hotel rooms. He challenges this adorable robot to a game of customer satisfaction. And he answers the question: Can a man and a robot fall in love?
Episode 7: Judah Vs. Emotion-Identifying Robot
Judah visits Kairos, a company that creates AI to recognize human emotional states. He takes a grand tour of their one-room campus and faces off against their algorithms to see who can better detect emotions.
Episode 8: Judah Vs. Tedious-Tasks Robot
Judah meets BRETT, the Berkeley Robot for the Elimination of Tedious Tasks. He posits his own theory for what BRETT represents. And he challenges the robot to a toddler’s game in the thrilling conclusion.
The web has put scientific knowledge at our fingertips. But how do you make sense of it all? How can you even begin to understand complex papers without the requisite education and training?
Call upon the aid of artificial intelligence and meet Iris.AI. It’s a tool that gives you a shortcut to all the science that’s out there on the web. It acts as a science assistant and makes sense of any openly available scientific paper you come across.
What Is Iris.AI All About?
The goal of Iris.AI is to make scientific research simpler for people in all walks of life. You could be a PhD student or an entrepreneur, but either way there’s no way you can trawl through the millions of open-access scientific papers published every year and make sense of the research within them.
You might also fail to connect the concepts in one paper to other concepts in others. Even the smartest and best-equipped human brain just isn’t powerful enough.
You can run the tool in two ways:
Paste the URL of a TED Talk.
Paste the URL of a scientific paper.
Iris makes research a lot simpler. It identifies key points in a paper’s abstract, organizes those key points into concept maps, and finally gives you access to the most relevant research papers, grouped by concept.
The concept map is a visualization that you, as the researcher, can use to get a bird’s-eye view of the topic. From there, you can dive deeper, browsing to the most relevant of the 66 million open-access research papers that Iris can crawl through.
Boost Your Search for Connected Ideas
Unlike many other AI technologies, Iris is not limited to one specific scientific field. It is young and still being developed, but the usefulness of the tool is clear: it can serve as a search engine for connected ideas and help even a non-scientist make sense of the research, and perhaps put it to use in an innovation. The right kind of artificial intelligence can save you a great deal of time.
When we set out to build MeetSpace (a video conferencing app for distributed teams), we had a familiar decision to make: What’s our tech stack going to be? We gathered our requirements, reviewed our team’s skillset and ultimately decided to use vanilla JavaScript and to avoid a front-end framework.
Using this approach, we were able to create an incredibly fast and light web application that is also less work to maintain over time. The average page load on MeetSpace has just 1 uncached request and is 2 KB to download, and the page is ready within 200 milliseconds. Let’s take a look at what went into this decision and how we achieved these results.
The most important requirement for the entire business was to build a better video conferencing tool. Many video conferencing solutions are out there, and they all suffer from really similar problems: reliability, connectivity, call quality, speed and ease of use. From our initial surveys, it was very clear that the most important problem for us to solve was to have flawless audio with low delay, high fidelity and high reliability.
We also found that the vast majority of our users (95% of those surveyed) were happy to use Chrome if it “delivered a superior video conferencing experience.” This meant we could use cutting-edge WebRTC technology to achieve our goals without having to launch on multiple native platforms, which would have been significantly more work. We decided to target Chrome and Firefox because of their WebRTC support, and we have our eye on Safari and Edge for the future.
According to our technical experiments, the best way to achieve high-quality reliable audio is to keep the app’s CPU and the network usage very low. This way, whenever traffic spikes on the network, there’s room to spare and the call doesn’t drop or stutter. This meant we needed to make our application very lightweight and very fast, so that it doesn’t take up any more of the CPU or network than necessary, and so that it is very fast to reload if things go wrong.
This was a big point in favor of using vanilla JavaScript and not a large framework. We needed to have a lot of control over the weight of the application, as well as its boot speed and idle CPU usage (changing elements, repaints, etc). This is a particularly low-level requirement that not many applications share. Many app vendors today aim to be better via superior UI and UX, more features and greater ease of use.
In our case, we needed 100% control over the UX of the video call to ensure that it was as light as possible and that we could maximally use whatever resources we had available to give priority to the call. As a bonus, this has the effect of not turning our users’ laptops into attack helicopters when their fans kick up to the max.
While I’d love to go into depth about how we did this with WebRTC, that’s a bit outside the scope of this article. However, I wrote an article about tuning WebRTC bandwidth by modifying the SDP payload, which you can read if you’d like some details.
Next, we created rough designs for all of the pages we needed and took an inventory of the interactions we’d have on those pages. They ranged from small to medium interactions, with two pages being the most complicated. Let’s look at a few.
We had a lot of very small interactions. One example is a button to copy a link:
Here we have a simple <input> element with a “Copy” button that, when clicked, copies the link. For these kinds of interactions, we could use a very common style: a progressive enhancement pattern inspired by jQuery’s plugins. On page load, we scan for a particular selector representing the component and enhance it with click-to-copy functionality. It’s so simple that we can share the entire plugin here:
// Copy component
(function() {
  window.addEventListener("load", function() {
    var els = document.querySelectorAll("[data-copy]");
    for (var i = 0; i < els.length; i++) {
      var el = els[i];
      el.addEventListener("submit", function(event) {
        event.preventDefault();
        event.target.querySelector('input[type="text"]').select();
        document.execCommand("copy");
      });
    }
  });
}());

// Select all component
(function() {
  window.addEventListener("load", function() {
    var els = document.querySelectorAll("[data-click-select-all]");
    for (var i = 0; i < els.length; i++) {
      var el = els[i];
      el.addEventListener("click", function(event) {
        event.target.select();
      });
    }
  });
}());
This component is actually made up of two different components: the copy component and the select-all component. The select-all component enhances an input by selecting the contents on click. This way, a user can click the input area, then hit Control/Command + C to copy on their own. Or, with the copy component, when the “Copy” button is clicked (triggering a form submission), we intercept the submission and copy the contents.
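The markup these plugins enhance might look something like the following sketch (the data attributes match the plugin selectors above; the URL value and the readonly attribute are purely illustrative):

<form data-copy>
  <input type="text" data-click-select-all value="https://example.com/rooms/abc123" readonly>
  <button type="submit">Copy</button>
</form>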
These interactions were so small that we decided that small vanilla components would be the simplest, fastest and clearest way to achieve the functionality. At the time of writing, we have about 15 of these small components throughout the app.
We identified two pages in the app with larger UX needs. This is where we weren’t sure if we wanted to use vanilla JavaScript, a small library or a large framework. First, we looked at our dashboard:
On this page, the main dynamic part is the portraits inside the rooms. We wanted to have live feedback on who is in each room, and so we use WebSockets to push participant information to all clients in the dashboard when someone joins or leaves a room. Then we have to add that person’s portrait inside the room. We decided that we’d be fine taking a simple approach here by pushing all participants down the socket for each room upon any change, then clearing out the room and rendering all participants fresh each time. The HTML was so simple that we didn’t need to use templates — just a few divs and an img tag. The most complicated part was some dynamic sizing. We knew at the beginning we could get away with vanilla JavaScript here.
WebSockets ended up being quite easy to implement without a framework. Here’s how the basic scaffold for our WebSocket interactions works:
We initialize the socket when we construct the Dashboard instance, with a URL to our endpoint. Our messages are all in the format {type: "type", data: { /* dynamic */ }}. So, when we receive a message, we parse the JSON and switch on the type. Then we call the appropriate method and pass in the data. When the socket closes, we attempt a reconnection after waiting a second, which keeps us connected if the user’s Internet stutters or if our servers have rebooted (for a deployment).
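Putting that together, a minimal sketch of the scaffold might look something like this (the Dashboard constructor name, the endpoint URL and the “participants” message type are illustrative, not MeetSpace’s exact code):

// Dashboard opens the WebSocket when it is constructed.
function Dashboard(socketUrl) {
  this.socketUrl = socketUrl;
  this.connect();
}

Dashboard.prototype.connect = function() {
  var self = this;
  this.socket = new WebSocket(this.socketUrl);

  // Every message is {type: "type", data: { /* dynamic */ }}:
  // parse the JSON, switch on the type and pass the data along.
  this.socket.addEventListener("message", function(event) {
    var message = JSON.parse(event.data);
    switch (message.type) {
      case "participants":
        self.renderParticipants(message.data);
        break;
    }
  });

  // If the socket closes (flaky connection, server deployment),
  // wait a second and reconnect.
  this.socket.addEventListener("close", function() {
    setTimeout(function() { self.connect(); }, 1000);
  });
};

Dashboard.prototype.renderParticipants = function(data) {
  // Clear each room and re-render every participant portrait
  // (rendering details omitted).
};

// Usage, with an illustrative endpoint:
new Dashboard("wss://example.com/dashboard-socket");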
Next, we got to the largest page in the app, the video chat room:
Here we had numerous challenges:
people joining and leaving,
people muting and unmuting,
people turning the video on and off,
a custom “borderless” layout for varying numbers of participants (not possible with pure CSS),
WebRTC video capture and peer-to-peer streaming,
synchronization of all of the above across all clients through WebSockets.
For this page, we were really on the fence. We knew we’d be doing only a small amount of DOM manipulation (just adding simple elements for each participant), but the amount of WebSockets and WebRTC synchronization was a bit daunting.
We decided that we could handle the DOM and WebSocket events, but that we wanted some help on WebRTC, mainly because of cross-browser differences (WebRTC hasn’t fully settled down). We opted to use the official WebRTC adapter.js because we had a lot of confidence in a first-party solution (and it’s in use quite broadly). Additionally, it’s a shim, so we didn’t have to learn much to be able to use it, and its implementation is pretty simple (we knew we’d end up reading through a lot of it).
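For a sense of what that looks like in practice, here is a minimal, illustrative sketch of capturing local media and handing it to a peer connection once adapter.js is loaded (the STUN server and media constraints are placeholders, not MeetSpace’s actual configuration):

// With adapter.js loaded as a plain script, the standard WebRTC APIs
// behave consistently across Chrome and Firefox.
var pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }]
});

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    // Hand our local audio and video tracks to the peer connection.
    // Signaling (offers, answers, ICE candidates) travels over our WebSocket.
    stream.getTracks().forEach(function(track) {
      pc.addTrack(track, stream);
    });
  });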
In addition to all the research we did ahead of time, we also had a hypothesis that we were very interested in testing: Could using less code (from others and including our own) and implementing from scratch result in less total work?
Our guess here was that the answer is yes when it comes to small to medium workloads. We knew that we had only a handful of pages and a handful of UX experiences, so we decided to take a risk and go without a lot of dependencies.
We’d all used (and enjoyed) many different frameworks in the past, but there’s always a tradeoff. When you use a framework, you have the following costs:
learning it,
customizing it to fit your needs (i.e. using it),
maintaining it over time (upgrades),
diving deep when there are bugs (you have to crack it open eventually).
Alternatively, working from scratch has the opposite costs:
building it (instead of learning and customizing),
refactoring (instead of customizing),
solving bugs the first time (which others have already found and fixed in their respective frameworks).
We guessed that, because we fell in the medium area of the spectrum, working from scratch would be less work in total — more work in the beginning (but less confusion and learning), a bit more of our own code in total, less code overall (if you count dependencies), easy-to-fix bugs (because we don’t have to dig deep) and generally much less maintenance.
MeetSpace has been around for a year now, and we have been surprisingly stable and reliable! I mean, I guess I shouldn’t say “surprised,” but honestly, I thought that coding it vanilla would cause more problems than it did. In fact, having full error traces and no frameworks to dig through makes debugging much easier, and less code overall means fewer problems. Another “fun” benefit is that when anything goes wrong, we immediately know it is our own fault. We don’t have to troubleshoot to determine where the bug is. It is definitely our code. And we fix it fast because we are quite familiar with our code.
If you look at our commit graph, you can see we had a lot of work at the beginning, but then over time we’ve had only large spikes coinciding with features.
We’ve never had any big refactorings or upgrades. The few times we did upgrade our libraries, nothing went wrong, because the updates were small and didn’t break the libraries’ APIs. It’s hard to compare with how much work it would have been had we used a framework, so the best we can do is to compare with past projects. My gut feeling is that, had we used a framework, more time would have been spent in total because of extra time spent learning the framework, tracking bugs through framework code and doing upgrades across framework versions. The best we can say about this project is that, over time, tracking down bugs and performing upgrades has been very little work.
Now, let’s get to the gory details: speed and size. For this section, I’ll talk about three pages: sign-in, dashboard and chat. Sign-in is the absolute lightest page, because it’s just a form and there’s no user context. Dashboard is our heaviest page that is mostly static. Chat is our heaviest page with the most action on it.
“Cold” refers to the initial visit, when the cache is empty; “warm” refers to subsequent visits, when cached files can be used.
All our JavaScript, CSS and images are fully cached, so after the first cold load, the only request and transfer being performed is the HTML of the page. There are zero other requests — no other libraries, templates, analytics, metrics, nothing. With every page clocking in at about 100 milliseconds from click to loaded, the website feels as fast as a single-page application, but without anywhere near as much complexity. We set our Cache-Control headers far into the future to fully cache our assets. Ilya Grigorik has written a great article on HTTP caching.
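For instance, an asset response can carry a long-lived caching header along these lines (the exact max-age value here is illustrative):

Cache-Control: public, max-age=31536000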
The majority of the page load is spent waiting on the server to process the request, render and send the HTML. And I’m sure you’re not surprised to hear that our back end is also almost entirely vanilla and frameworkless; built with Go, it has a median response time of 8.7 milliseconds.
Please note: This article and the benchmarks here are for MeetSpace’s inner application, not our marketing website. So, if you go to MeetSpace’s website, you’ll have to click “Sign in” to reach the main application, where these improvements reside. The main www site has a different architecture.
For MeetSpace? Definitely. What about other projects? I think that all of the research we did ahead of time, our hypothesis on cost and our results point to a pretty simple tradeoff based on one broad metric: complexity.
The simpler, smaller and lighter an application is, the cheaper and easier it will be to write from scratch. As complexity and size (of your application and of your team and organization) grow, so does the likelihood that you’ll need a bigger base to build on. You’ll see stronger gains from using a standard framework across a large application, as well as from using the common language of that framework across a large team.
At the end of the day, your job as a developer is to make the tough tradeoff decisions, but that’s my favorite part!
Trump was in the news recently for possibly taping conversations in the Oval Office. But can you do that? Turns out the answer is kinda complicated. If you’re thinking of secretly recording a conversation with someone, you should probably read this first.
Whether you’re recording a phone call, an in-person conversation, or trying to record the conversations of others, it all comes down to consent and how the federal government, and each state’s individual laws, define that. You might want to capture your enemy’s true nature on tape for all to hear, but here’s the deal: it’s probably illegal.
What Federal Law Says
According to the Wiretap Act of 1968 (18 U.S.C. § 2511.), it’s illegal to secretly record any oral, telephonic, or electronic communication that is reasonably expected to be private. So, for example, recording a conversation with somebody in a bedroom, with the door shut, on private property, without them knowing is technically a federal crime in the loosest sense.
There are, however, a few exceptions to this law that create some sizable loopholes. The biggest being the “one-party consent” rule that says you can record people secretly if at least one person in the conversation consents to the recording, or if the person recording is authorized by law to do it (like police with a warrant). If we go back to our bedroom recording, that means you could record your conversation as long as one person—you—consents to it. Sneaky, eh? But here’s the catch: you have to actually be a part of that conversation. If you were simply recording two other people talking while standing nearby and not saying a word, you then have no consent from any of the parties, and thus it would be illegal.
State Laws Can Preempt Federal Law
Federal law does not always reign supreme when it comes to recording conversations in the U.S., though. Twelve states have “two-party (or all-party) consent” laws, meaning you cannot record conversations unless every single person in that conversation gives consent. Those states are:
California
Connecticut
Florida
Illinois
Maryland
Massachusetts
Michigan
Montana
Nevada
New Hampshire
Pennsylvania
Washington (not D.C.)
If we go back to the secret bedroom recording example, everyone in the room would need to consent to your recording if you were in one of the states listed above. But then it wouldn’t really be a secret recording anymore, would it?
While a state’s recording laws usually determine the legality of taping conversations, federal law takes precedence and preempts all state laws if it’s considered to be more protective of privacy. So even if a state did allow secret recordings without any consent, federal law would preempt that state’s laws.
Location, Location, Location
The other important aspect to consider is where you’re recording your conversation. The federal Wiretap Act promises a “reasonable expectation” of privacy, so there’s some wiggle room there. A closed-off bedroom in a private home is a reasonable place to expect privacy, so taping there can be risky, even with the power of one-party consent. If there was a party being thrown in that house, however, things could be a little different. Litigator Deborah C. Logan explains:
Whether one has a reasonable expectation of privacy in a given situation depends upon the context: Was the conversation in a public or private location? Did the individual being recorded treat the subject matter as private? A person who is bragging at a party about cheating a friend in a business deal cannot later object to the introduction of a recording of this admission as evidence in a lawsuit filed by his ex-friend.
As you can see, public locations open things up a tad. Secretly recording a conversation at a park or train station is perfectly legal if you’re in a one-party consent state and part of the conversation. But it’s still illegal in a two-party consent state.
And the definition of “safe places to record” changes on a state-by-state, case-by-case basis. Public places are almost always safe, but the definition of public place can get stretched sometimes. For example, a privately owned business office may seem like a private location, but some states, like Florida, do not “recognize an absolute right to privacy in a party’s office or place of business.” That doesn’t mean you should go secretly recording your mean boss, though, since it can still be illegal depending on where you are, what’s being said, and how it’s being said.
You also have to be careful about recording phone calls, especially if you’re talking with someone who’s in a state with different laws than yours. If you live in New York, a one-party state, and want to record a phone call with someone in California, a two-party state, you need to have their consent in addition to the consent you’ve automatically granted. If you use an app like Total Recall on Android or Tape a Call on iOS, you need to double-check that you’re not recording all calls by default and accidentally taping people illegally.
Audio and Video Aren’t the Same Thing, but Can Be Intertwined
Video recording law is different from audio recording law—and a topic for another time—but it’s important to know what those differences are. Generally speaking, you have the right to record video in all public spaces without need of consent. A public space is defined as anywhere any member of the public can legally access, so public transit facilities, parks, streets, etc. are all fair game. Recording video on private property, though, is up to the discretion of the property owner, private security, or police, but secret video recordings are illegal on all private property in some states, like California.
But here’s the most important part: recording video of a conversation in public might be legal, but recording audio along with that video is not if you’re in a two-party state. For example, recording a video of your heated conversation with a surly sales associate is illegal in all two-party states if they don’t give you permission to record them. Even in one-party states, recording video like that is dubious at best.
If you get busted secretly recording conversations, you could face jail time, fines, or even be sued. The federal Wiretap Act lists a possible sentence of five years in prison with a fine of $500. But that’s usually in addition to penalties under whichever state law was violated. Getting busted in California (Cal. Penal Code § 631.), for example, can net you another year in prison and a $2,500 fine. Also, most states let the non-consenting party who was recorded sue you for damages, which could be much worse than those other fines.
Check local laws first: Always know what your state’s recording laws are before you do anything, and double check laws if you’re recording calls from out of state. Do you need everyone’s consent? Or just yours? Where are you recording?
Know what consent looks like, and get it before you record: Consent is best when it’s verbal and part of your recording, but give a preemptive warning as well. Notify the other parties that you intend to record your interaction, wait to record until they agree, begin recording, then ask for permission again on tape.
Don’t be sneaky: I know, you’d probably love to catch a cheater red-handed, or record your boss sexually harassing you, but those types of secret recordings can seriously backfire. More often than not, the recordings are deemed illegal and inadmissible in court, and then you get busted for breaking the law and sued by the person you were hoping to take down.
It may be a hard pill to swallow, but secret recordings are rarely a good idea, whether you’re a president or a wannabe P.I. Get consent, don’t hide your camera, microphone, or recorder, and don’t try to goad people into revealing their deepest, darkest secrets without them knowing they’re on tape, or you’re going to make things worse for yourself.
The slides and videos from the Percona Live Open Source Database Conference 2017 are available for viewing and download. The videos and slides cover the keynotes, breakout sessions and MySQL and MongoDB 101 sessions.
To view slides, go to the Percona Live agenda, select the talk you want slides for from the schedule, and click through to the talk’s web page. The slides are available below the talk description. There is also a page with all the slides that is searchable by topic, talk title, speaker, company or keywords.
To view videos, go to the Percona Live 2017 video page. The available videos are searchable by topic, talk title, speaker, company or keywords.
There are a few slides and videos outstanding due to unforeseen circumstances. However, we will upload those as they become available.
Some examples of videos and slide decks from the Percona Live conference:
MySQL 101: Choosing a MySQL High Availability Solution, presented by Marcos Albe, Principal Technical Services Engineer, Percona
Video: http://ift.tt/2qpt4ph Slides: http://ift.tt/2lNckTu
Breakout Session: Using the MySQL Document Store, presented by Mike Zinner, Sr. Software Development Director, and Alfredo Kojima, Sr. Software Development Manager, Oracle
Video: http://ift.tt/2qpfv9i Slides: http://ift.tt/2p69bmR
Keynote: Continuent is Back! But What Does Continuent Do Anyway? Presented by Eero Teerikorpi, Founder and CEO, and MC Brown, VP Products, Continuent
Video: http://ift.tt/2qpfC4G Slides: http://ift.tt/2nQOmfg
Please let us know if you have any issues. Enjoy the videos!
We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.
FactGem, which is launching in our Disrupt New York Battlefield competition today, was born out of Megan Kvamme‘s frustration with trying to juggle hundreds of Excel spreadsheets — and the data in them — while she was working as an investment banker. When she tried to find a software product that would allow her to more easily analyze all of this data, she couldn’t find what she was looking for, so she started working on what would later become FactGem back in 2011.
“People said ‘no,’ that’s a hard problem. You can’t do that,” Kvamme recalled, and later added that what she wanted to build was essentially a Bloomberg terminal for data. Shortly after she started exploring the space, she met Clark Richey, now FactGem’s CTO, who has an extensive background in working with databases at MarkLogic and, as an intelligence contractor, worked with an array of three-letter agencies. “He was the first guy who was both smart enough and crazy enough to say, ‘hey, we deal with these problems in the intelligence world,’ ” Kvamme said.
In its current iteration, FactGem is essentially an integration service that allows companies to bring their various data sources together to easily define new data models. It consists of three tools: WhiteboarderR, a drag-and-drop tool for describing the model (just like you would on a whiteboard); UploadR for matching this model with your data; and DashboardR for — you guessed it — building dashboards that also allow their users to easily dig deeper into the data.
The core idea here is to allow anybody in a company to work with these tools without ever having to write a single line of code.
While the project started out using MarkLogic’s database, the team later went on to use Neo4j as its underlying graph database. That’s where FactGem’s so-called “data fabric” comes in: it ties together all the incoming data, which can arrive in real time. As Richey noted, the company worked hard to keep its stack platform agnostic. The team contends that using its system, businesses will be able to find more “gems” in their data — that is, business insights they otherwise may have missed.
Users can work with FactGem’s own dashboard tool or export their data to Tableau. Over time, the team also plans to add more advanced analytics features, including support for R (though once the data goes to Tableau, you could always use that service’s R support, too).
“We learned some important things along the way,” Kvamme told me about the company’s experience so far. “What it comes down to is that we solved this impossible problem from a tech perspective, but what we’re really providing is a business solution.”
FactGem is currently working with a number of clients, including in the retail and financial services space, though the team notes that it is also in the process of setting up a number of proof-of-concept projects for new clients.
The Columbus, Ohio-based company is currently self-funded (or funded through revenue, as Kvamme put it) and has a staff of about a dozen people. The team says that it is open to raising money, but that it would have to be the right investor.