SAP License Fees Also Due For Indirect Users, Court Rules

SAP’s licensing fees "apply even to related applications that only offer users indirect visibility of SAP data," according to a Thursday ruling by a U.K. judge. Slashdot reader ahbond quotes Network World:
The consequences could be far-reaching for businesses that have integrated their customer-facing systems with an SAP database, potentially leaving them liable for license fees for every customer that accesses their online store. "If any SAP systems are being indirectly triggered, even if incidentally, and from anywhere in the world, then there are uncategorized and unpriced costs stacking up in the background," warned Robin Fry, a director at software licensing consultancy Cerno Professional Services, who has been following the case…
What was in dispute was whether the SAP PI license fee alone was sufficient to allow Diageo’s sales staff and customers to access the SAP data store via the Salesforce apps, or whether, as SAP claimed, those staff and customers had to be named as users and a corresponding license fee paid. On Thursday, the judge sided with SAP on that question.





via Slashdot
SAP License Fees Also Due For Indirect Users, Court Rules

How to Epoxy Voids in Wood, Make Your Own Kitchen Knife, Pour a Concrete Coffee Table and More

Kitchen Knife

Jimmy DiResta needs a new kitchen knife. He could go buy one, or he could make one—from scratch. The way that he marks the centerline of the edge of the bar stock, and uses an out-of-square length of wood to grind the blade angle, is very clever:

Triple Tenon Joined Lumber Rack

Matthias Wandel engineers a very atypical and space-efficient structure for a lumber rack:

Push Stick Saw

Frank Howarth gets artistic with his push stick design:

"Goodbye Shop"

This is kind of a shocker! After all of the work April Wilkerson put into kitting out her shop and improving her home and property, she and her husband are selling the place. I did not see this coming.

Epoxied and Sandblasted Live Edge Slab Coffee Table

You might think creating a live edge table is just a matter of throwing the slab on the legs. Not so. Here Marc Spagnuolo shows us how to epoxy voids in the wood and use a sandblaster to clean up the live edge, and goes over the crucial finishing process in detail:

Finishing Experiments

Finishing seems like such a black art that I’m always glad to see people giving tips or doing experiments with it. While the first three minutes of this video is the Samurai Carpenter turning bowls, he then explains how he’s using epoxy resin (as Spagnuolo did above, to repair voids) and experimenting with some "Turbo Cure" and wax. At the end he announces he’s got a trio of new Japanese saws for sale on his site:

DIY Coffee Table with a Concrete Top

Ben of HomeMade Modern uses his plywood/reinforcing mesh/concrete technique to create a coffee table:


via Core77
How to Epoxy Voids in Wood, Make Your Own Kitchen Knife, Pour a Concrete Coffee Table and More

Why Your Knuckles Make That Popping Sound When You Crack Them

If you love making your knuckles and other joints pop, you might’ve heard that doing so is “bad for you” and that “you’ll get arthritis.” Short answer: we’re not sure. Long answer: this video from Vox gives you the lowdown on what’s actually happening in your joints.

That characteristic (and honestly, disturbing) popping sound you hear when you crack your knuckles comes from a gas-filled lubricant within your joints called synovial fluid. Popping a joint stretches out the space between your bones and sucks the fluid into that space, and that event is associated with that lovely sound.

As for whether it’s “bad,” well, the research is not clear. If you don’t feel pain when you crack them, then you probably don’t need to worry.

Here’s what happens to your knuckles when you crack them | Vox


via Lifehacker
Why Your Knuckles Make That Popping Sound When You Crack Them

Looking to become the “OS” for financial services, OpenFin raises $15 million

Some of the biggest financial services firms in the world are coming together to back a small New York-based technology company called OpenFin with aspirations to become the “operating system” for financial services applications.

The company has just closed a new $15 million round from investors led by the banking giant J.P. Morgan, the venture firm Bain Capital Ventures, and NEX Euclid Opportunities — the investment fund affiliated with the publicly traded electronic trading platform, NEX Group plc.

Additional investors in the round included DRW Venture Capital, Nyca Partners, Pivot Investment Partners, and select angel investors and financial industry execs.

The company now bills itself as an Android for capital markets, though “Docker” for capital markets may be the better analogy. Without getting too in-the-weeds (although maybe I already have), OpenFin was developed on top of Google’s Chromium project — an open source project (anyone can see the code) that provides the same code Google uses for its Chrome browser.

OpenFin has forked that project to develop its own layer for developing and distributing applications. The company bills itself as a way for companies developing applications for financial services and capital markets to operate effectively across the different programming environments in each big bank, marketplace, hedge fund, or money mover.

The company’s software is already used by 35 of the biggest banks, hedge funds, and trading platforms and is installed on over 100,000 desktops.

OpenFin bills itself as a more secure, fully integrated way for anyone using trading or communication tools to work in the financial services sector.

“There’re three main things that [OpenFin’s service] does,” says OpenFin chief executive Mazy Dar, a former executive with the Intercontinental Exchange. “It’s the conduit that gets the app onto the [system], it provides security, and is the unifying layer to allow apps to talk to each other.”

Since applications that run on top of OpenFin never access the underlying network within a financial services institution, Dar argues that deployments on top of OpenFin are far more secure for their users. And in the notoriously security-focused financial services sector, that’s a good thing.

Currently, it can take anywhere from 6 to 18 months to deliver an application or an update to a desktop inside a financial services firm, according to OpenFin. The applications not only have to be vetted by security, but they also have to be integrated or customized to the back-end of each company (and each company has a different, proprietary, back-end).

OpenFin pitches itself as a service that can allow different fin-tech focused apps to communicate effectively without accessing core networks or existing in silos.

Even better for banks, there’s no charge. The company makes its money on per-seat fees charged to the companies that make and distribute the applications that run on its service.

This business model is both a blessing and a curse. OpenFin works insofar as there are applications that want to run on OpenFin’s platform. So far the company has 50 apps that are distributed through its service from customers including J.P. Morgan, Citadel, Electronifie, REDI, Trumid, Greenkey, ICAP, OpenDoor, embonds, and Tullett Prebon.

What’s appealing about OpenFin isn’t just the company itself (which, because I’m a nerd, I think is pretty fascinating), but also the ability to extend the company’s thesis into other industries.

Taking the specialized OS approach means that other security-conscious industries (oil and gas, utilities, heavy industry) could create a buffer between applications and their underlying architectures and (correct me if I’m wrong, y’all) create a stronger defense against cyber threats.

Just throwing that out there.

via TechCrunch
Looking to become the “OS” for financial services, OpenFin raises $15 million

Research the CFPB’s Massive Database of Customer Complaints While You Still Can

The Consumer Financial Protection Bureau (CFPB) is a great resource for consumers, but its days may be numbered. Take advantage of one of its best features while you still can: it has a massive database of detailed complaints from bank and credit union customers.

The database has given customers a platform to complain about their experiences with financial institutions, and according to the CFPB’s website, here’s how it works:

Each week we send thousands of consumers’ complaints about financial products and services to companies for response. Those complaints are published here after the company responds or after 15 days, whichever comes first. By adding their voice, consumers help improve the financial marketplace.

Financial institutions don’t like this, of course, because they say there’s no guarantee the complaints are accurate (even though there is a field for the company’s response, and 97% of complaints get a response). Basically, if you’re a bank that takes advantage of customers, you probably don’t like this database, because it gives those customers a platform to call you out.

For a consumer, though, it’s a pretty valuable platform for seeing what potential issues you might run into. Customer narratives, like the one below, help tip off the CFPB to illegal business practices, like the whole Wells Fargo fiasco last year. Thanks to the CFPB and its database, Wells Fargo was made to stop opening unauthorized accounts. Even better, it was forced to refund customers the fees they racked up in those unauthorized accounts.

With the database in danger of disappearing, it’s a good time to take advantage of it if you haven’t already. If you’re thinking of switching banks or credit unions, research the information while you still can. You can download the data, too.

Consumer Complaint Database | CFPB


via Lifehacker
Research the CFPB’s Massive Database of Customer Complaints While You Still Can

Linus Torvalds: Talk of Tech Innovation is Bullshit. Shut Up and Get the Work Done

Linus Torvalds believes the technology industry’s celebration of innovation is smug, self-congratulatory, and self-serving. From a report on The Register: The term of art he used was more blunt: "The innovation the industry talks about so much is bullshit," he said. "Anybody can innovate. Don’t do this big ‘think different’… screw that. It’s meaningless. Ninety-nine per cent of it is get the work done." In a deferential interview at the Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux kernel and his attitude toward work. "All that hype is not where the real work is," said Torvalds. "The real work is in the details." Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration, and one per cent innovation.





via Slashdot
Linus Torvalds: Talk of Tech Innovation is Bullshit. Shut Up and Get the Work Done

This 3,000-Year-Old Bronze Age Sword Is Absolutely Incredible

Careful, now—that sword is 3,000 years old. (Image: GUARD Archaeology)

In what archaeologists are calling the “find of a lifetime,” a hoard of Late Bronze Age weapons has been discovered at a Scottish construction site. Among the items found is a gold-decorated spearhead, and a 3,000-year-old bronze sword in remarkably good condition.

The artifacts were found during an archaeological evaluation of a field in Carnoustie, Scotland, prior to the construction of two soccer fields. The firm commissioned to do the work, GUARD Archaeology, says the hoard of ancient metalwork is a “rare and internationally significant discovery.” The items were found in a pit close to a Bronze Age settlement currently being excavated by the archaeologists.

The gold-decorated spearhead. (Image: GUARD Archaeology)

The spearhead was found next to a bronze sword, a pin, and sheath fittings. All items, which are dated to around 3,000 years old, are archaeologically significant, but the presence of the gold-decorated spearhead is exceptional.

“The earliest Celtic myths often highlight the reflectivity and brilliance of heroic weapons,” explained Blair in an interview with the BBC. “Gold decoration was probably added to this bronze spearhead to exalt it both through the material’s rarity and its visual impact.”

The 3,000-year-old bronze sword alongside the remnants of a sheath. (Image: GUARD Archaeology)

Other exceptional finds include well-preserved organic remains, for instance, a leather and wooden sheath that enveloped the sword. It’s considered the best preserved Late Bronze Age sheath ever found in Britain. The archaeologists also found fur skin wrapped around the spearhead, and textile around the pin and sheath. Organic items like this rarely survive for so long in the ground.

Based on the archaeological evidence, it appears that humans lived on this particular spot for an exceptionally long time. The excavation revealed the largest Neolithic hall so far found in Scotland, a building dating to around 4,000 BC. This structure, write the researchers, “may have been as old to the people who buried the weapon hoard, as they are to us.”

Whoa. Let that sink in for a minute…

Archaeologists toiling away at the Carnoustie site. (Image: GUARD Archaeology)

Along with the weapon hoard, the GUARD team has uncovered around 1,000 archaeological features in the area, including a dozen Bronze Age semi-circular houses, a pair of long Neolithic-era dwellings, and various broken pots and artifacts. It’s not clear if this site was occupied continuously for thousands of years, or if the settlements were separated in time by many centuries.

Regardless, it doesn’t appear that the local kiddies will be playing soccer on these ancient fields any time soon.

[GUARD Archaeology]

via Gizmodo
This 3,000-Year-Old Bronze Age Sword Is Absolutely Incredible

Sysadmin 101: Troubleshooting

I typically keep this blog strictly technical, keeping observations, opinions
and the like to a minimum. But this post, and the next few, will be about
basics and fundamentals for starting out in system administration/SRE/systems
engineering/sysops/devops (whatever you want to call yourself) roles.
Bear with me!

“My web site is slow”

I picked the type of issue for this article at random; the approach applies
to pretty much any sysadmin-related troubleshooting.
It’s not about showing off the cleverest one-liners to find the most
information. It’s also not an exhaustive, step-by-step “flowchart” with the
word “profit” in the last box.
It’s about the general approach, by means of a few examples.
The example scenarios are solely for illustrative purposes. They sometimes
rest on assumptions that don’t apply to all cases all of the time, and I’m
positive many readers will go “oh, but I think you will find…” at some point.
But that would be missing the point.

Having worked in, or alongside, support organizations for over a decade,
one thing strikes me time and time again, and it is what made me write
this:
the instinctive reaction many techs have when facing a problem is
to start throwing potential solutions at it.

“My website is slow”

  • I’m going to try upping MaxClients/MaxRequestWorkers/worker_connections
  • I’m going to try to increase innodb_buffer_pool_size/effective_cache_size
  • I’m going to try to enable mod_gzip (true story, sadly)

“I saw this issue once, and then it was because X. So I’m going to try to fix X
again; it might work.”

This wastes a lot of time and sends you on a wild goose chase. In the dark. Wearing greased mittens.
InnoDB’s buffer pool may well be at 100% utilization, but that’s just because
there are remnants of a large one-off report someone ran a while back in there.
If there are no evictions, you’ve just wasted time.

Quick side-bar before we start

At this point, I should mention that while it’s equally applicable to many
roles, I’m writing this from a general support system administrator’s point of
view. In a mature, in-house organization, or when working with larger, fully managed or
“enterprise” customers, you’ll typically have everything instrumented,
measured, graphed, thresheld (not even a word) and alerted on. Then your approach
will often be rather different. We’re going in blind here.

If you don’t have that sort of thing at your disposal:

Clarify and First look

Establish what the issue actually is. “Slow” can take many forms. Is it time to
first byte? That’s a whole different class of problem from poor JavaScript
loading and pulling down 15 MB of static assets on each page load.
Is it slow, or just slower than it usually is? Two very different plans of
attack!
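To make that first distinction concrete, curl’s --write-out timing variables split a single request into phases. A minimal sketch, with a stand-in URL:

```shell
# Split one request into phases with curl's -w timing variables.
# The URL is a stand-in; point it at the site being reported as slow.
url="http://www.example.com/"
curl -o /dev/null -s --max-time 10 -w \
  'dns:%{time_namelookup}s tcp:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s bytes:%{size_download}\n' \
  "$url"
# Large ttfb: the server side is slow to respond.
# Small ttfb but large total with lots of bytes: probably asset weight
# or the network path, not the server.
```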

Make sure you know what the issue reported/experienced actually is before you
go off and do something. Finding the source of the problem is often difficult
enough, without also having to find the problem itself.
That is the sysadmin equivalent of bringing a knife to a gunfight.

Low hanging fruit / gimmies

You are allowed to look for a few usual suspects when you first log in to a
suspect server. In fact, you should! I tend to fire off a smattering of commands
whenever I log in to a server to just very quickly check a few things; Are we
swapping (free/vmstat), are the disks busy (top/iostat/iotop), are we dropping
packets (netstat/proc/net/dev), is there an undue amount of connections in an
undue state (netstat), is something hogging the CPUs (top), is someone else on
this server (w/who), any eye-catching messages in syslog and dmesg?
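As an illustration only (the exact tools vary by distro, so each check here is skipped gracefully if its tool is missing), that half-minute sweep might be scripted like this:

```shell
#!/bin/sh
# First-look triage: a 30-second sweep of the usual suspects. Read-only.
have() { command -v "$1" >/dev/null 2>&1; }

triage() {
    have uptime && uptime                  # load averages at a glance
    have free   && free -m                 # are we swapping?
    have vmstat && vmstat 1 3              # run queue, si/so, iowait
    have iostat && iostat -x 1 2           # busy disks? (sysstat package)
    have ss     && ss -s                   # connection-state summary
    have w      && w                       # is someone else on this box?
    have dmesg  && dmesg 2>/dev/null | tail -n 20   # kernel complaints
    for log in /var/log/syslog /var/log/messages; do
        [ -r "$log" ] && tail -n 20 "$log" # eye-catching messages?
    done
    return 0
}

triage
```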

There’s little point to carrying on if you have 2000 messages from your RAID
controller about how unhappy it is with its write-through cache.

This doesn’t have to take more than half a minute.
If nothing catches your eye – continue.

Reproduce

If there indeed is a problem somewhere, and there’s no low hanging fruit to be
found:

Take all steps you can to try and reproduce the problem. When you can
reproduce, you can observe. When you can observe, you can solve.
Ask the person reporting the issue what exact steps to take to reproduce the
issue if it isn’t already obvious or covered by the first section.

Now, for issues caused by solar flares and clients running exclusively on
OS/2, it’s not always feasible to reproduce. But your first port of call
should be to at least try!
In the very beginning, all you know is “X thinks their website is slow”. For
all you know at that point, they could be tethered to their GPRS mobile phone and
applying Windows updates. Delving any deeper than we already have at that
point is, again, a waste of time.

Attempt to reproduce!

Check the log!

It saddens me that I felt the need to include this. But I’ve seen escalations
that ended mere minutes after someone ran tail /var/log/..
Most *NIX tools these days
are pretty good at logging. Anything blatantly wrong will manifest itself quite
prominently in most application logs. Check it.
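A hedged sketch of that first pass; the paths below are common defaults rather than a complete list for any given stack:

```shell
# Skim the tail of whichever common logs exist on this box, and pull out
# lines that tend to matter. Paths are typical defaults; adjust to your stack.
for log in /var/log/syslog /var/log/messages \
           /var/log/apache2/error.log /var/log/httpd/error_log \
           /var/log/nginx/error.log /var/log/mysql/error.log; do
    [ -r "$log" ] || continue
    echo "== $log =="
    tail -n 100 "$log" | grep -iE 'error|fail|fatal|denied|refused|timeout'
done
```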

Narrow down

If there are no obvious issues, but you can reproduce the reported problem,
great.
So, you know the website is slow.
Now you’ve narrowed things down to: Browser rendering/bug, application
code, DNS infrastructure, router, firewall, NICs (all eight+ involved),
ethernet cables, load balancer, database, caching layer, session storage, web
server software, application server, RAM, CPU, RAID card, disks.
Add a smattering of other potential culprits depending on the set-up. It could
be the SAN, too. And don’t forget about the hardware WAF! And.. you get my
point.

If the issue is time-to-first-byte, you’ll of course start applying known fixes
to the web server; that’s the one responding slowly, and it’s the part you know
the most about, right? Wrong!
You go back to trying to reproduce the issue. Only this time, you try to
eliminate as many potential sources of issues as possible.

You can eliminate the vast majority of potential culprits very
easily:
Can you reproduce the issue locally from the server(s)?
Congratulations, you’ve
just saved yourself having to try your fixes for BGP routing.
If you can’t, try from another machine on the same network.
If you can – at least you can move the firewall down your list of suspects (but do keep
a suspicious eye on that switch!)

Are all connections slow? Just because the
server is a web server, doesn’t mean you shouldn’t try to reproduce with another
type of service. netcat is very useful in these scenarios
(but chances are your SSH connection would have been lagging
this whole time, as a clue)! If that’s also slow, you at least know you’ve
most likely got a networking problem and can disregard the entire web
stack and all its components. Start from the top again with this knowledge
(do not collect $200).
Work your way from the inside-out!
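For instance, timing a bare TCP connect with netcat takes the whole web stack out of the loop (the hostname is a stand-in):

```shell
# Time a raw TCP connect, no HTTP involved. -z: just connect, -w 5: 5s timeout.
host=www.example.com           # stand-in: use the slow server's name here
time nc -z -w 5 "$host" 22     # is ssh slow to connect too?
time nc -z -w 5 "$host" 80     # and the web port itself?
# If even a bare connect crawls, disregard the web stack: it's the network
# path (or DNS; compare against the raw IP) you should be chasing.
```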

Even if you can reproduce locally – there’s still a whole lot of “stuff”
left. Let’s remove a few more variables.
Can you reproduce it with a flat-file? If i_am_a_1kb_file.html is slow,
you know it’s not your DB, caching layer or anything beyond the OS and the webserver
itself.
Can you reproduce with an interpreted/executed
hello_world.(py|php|js|rb..) file?
If you can, you’ve narrowed things down considerably, and you can focus on
just a handful of things.
If hello_world is served instantly, you’ve still learned a lot! You know
there aren’t any blatant resource constraints, any full queues or stuck
IPC calls anywhere. So it’s something the application is doing or
something it’s communicating with.
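Assuming a fairly typical setup where you can drop files into the docroot (the paths and the choice of PHP here are purely illustrative), that ladder might look like:

```shell
# Elimination ladder: each step keeps less "stuff" in the request path.
docroot=/var/www/html                      # assumption: adjust to your server
[ -w "$docroot" ] || docroot=$(mktemp -d)  # fall back somewhere writable
host=localhost

# 1. Flat file: takes the application, cache and DB out of the picture.
head -c 1024 /dev/zero > "$docroot/i_am_a_1kb_file.html"
time curl -s --max-time 10 -o /dev/null "http://$host/i_am_a_1kb_file.html"

# 2. Trivial dynamic page: adds the interpreter back in, nothing else.
printf '<?php echo "hello world";' > "$docroot/hello_world.php"
time curl -s --max-time 10 -o /dev/null "http://$host/hello_world.php"

# Fast flat file + slow hello_world: look at the app server/interpreter.
# Both fast: the constraint is in what the application does or talks to.
```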

Are all pages slow? Or just the ones loading the “Live scores feed” from a
third party?

What this boils down to is; What’s the smallest amount of “stuff” that you
can involve, and still reproduce the issue?

Our example is a slow web site, but this is equally applicable to almost
any issue. Mail delivery?
Can you deliver locally? To yourself? To <common provider here>? Test
with small, plaintext messages. Work your way up to the 2MB campaign
blast. STARTTLS and no STARTTLS.
Work your way from the inside-out.

Each of these steps takes mere seconds, far quicker than
implementing most “potential” fixes.

Observe / isolate

By now, you may already have stumbled across the problem by virtue of being unable to
reproduce when you removed a particular component.

But if you haven’t, or you still don’t know why;
Once you’ve found a way to reproduce the issue with the smallest amount of
“stuff” (technical term) between you and the issue, it’s time to start
isolating and observing.

Bear in mind that many services can be run in the foreground, and/or have
debugging enabled. For certain classes of issues, it is often hugely helpful to do this.

Here’s also where your traditional armory comes into play. strace, lsof, netstat,
GDB, iotop, valgrind, language profilers (cProfile, xdebug, ruby-prof…).
Those types of tools.

Once you’ve come this far, though, you rarely end up having to break out the
profilers or debuggers.

strace is often a very good place to start.
You might notice that the application is stuck on a particular read() call
on a socket file descriptor connected to port 3306 somewhere. You’ll know
what to do.
Move on to MySQL and start from the top again. Low hanging
fruit: “Waiting_for * lock”, deadlocks, max_connections.. Move on to: All
queries? Only writes? Only certain tables? Only certain storage
engines?…

You might notice that there’s a connect() to an external API resource that
takes five seconds to complete, or even times out. You’ll know what to do.

You might notice that there are 1000 calls to fstat() and open() on the
same couple of files as part of a circular dependency somewhere. You’ll
know what to do.

It might not be any of those particular things, but I promise you, you’ll
notice something.

If you’re only going to take one thing from this section, let it be: learn
to use strace! Really learn it, read the whole man page. Don’t even skip
the HISTORY section. man each syscall you don’t already know what it
does. 98% of troubleshooting sessions end with strace.
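You can get a feel for strace without waiting for an incident by tracing something trivial first; the PID in the comment below is a placeholder:

```shell
# Practice run: trace a harmless command. -T prints the time spent inside
# each syscall, -c prints a summary table instead of the full stream.
strace -T -e trace=openat,read,write cat /etc/hostname 2>&1 | tail -n 15
strace -c cat /etc/hostname

# Against a live, misbehaving process (1234 is a placeholder PID):
#   strace -f -T -p 1234 -e trace=network
# A read() blocking on a socket to port 3306? Now you know where to look.
```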

via Planet MySQL
Sysadmin 101: Troubleshooting