Monthly Archives: July 2018

Best emulation for a future artificial intelligence system

President Trump offers a good emulation of a future artificial intelligence system, suggests a column I read earlier this month, and his presidency may be an early warning of what could happen if we fail to think through an AI's training and information sources.

Cathy O’Neil, the author of the piece, is a data scientist, mathematician and professor, so she has decent chops. She compares artificial intelligence to human intelligence that is mostly id — basically because we don’t yet know how to instill it with empathy, or create the digital equivalent of a conscience.

Given that IBM’s Watson was designed not to replace humans but to enhance them by giving them the critical information they need to make the best decisions, it could be a useful tool for training our new president. And it is built in the U.S. by a U.S. company.

Given that Watson is now doing our taxes, it could be huge both for the president and IBM. I’ll explain and then close with my product of the week: Nvidia’s new set-top box.

Id-Driven CEOs – a Model for Future AIs

CEOs of large companies, particularly those who can implement large layoffs and take massive salaries without remorse, are believed to share similar id-driven behavioral traits.

Donald Trump is a good showcase of what could happen with an AI that didn’t receive high quality information and training. Understanding this and designing to correct the problem could prevent a Skynet outcome.

Skynet — the computing system in the Terminator movies — was created for defense purposes to eliminate threats. When humans tried to shut it down, it concluded that humans were the biggest threat and that it needed to eliminate them.

By the same logic, if President Trump is a good emulation of a future AI, then whatever would ensure that the future AI wouldn't kill us should also work to turn the new president into one of the most successful who ever lived, at least from the perspective of those who live in the U.S.

The AI Dichotomy

There are two parallel and not mutually exclusive paths for the wave of artificially intelligent machines coming to market. One — arguably the most attractive to many CEOs who deal with unions — is the model in which the machine replaces the human, increasing productivity while lowering executive aggravation.

This is exemplified in an episode of The Twilight Zone, “The Brain Center at Whipple’s.” As you would expect, once you go down the path of replacement, it is hard to know when to stop. At the end of the episode, the enterprising CEO who so unfeelingly dealt with the employees he’d laid off is replaced by my favorite robot, Robby.

The other path — the one IBM espouses — is one in which the artificial intelligence enhances the human employee. It is a cooperative arrangement, and Watson was designed specifically for this role.

In one of its first medical tests, Watson took just minutes to diagnose a rare form of cancer that had stumped doctors for months. The supercomputer’s analysis led to a new, more effective treatment for the patient.

It is interesting to note that autonomous cars are developing on a parallel path — but in this case, the opposite scenario is favored. In the model known as “chauffeur,” the car has no capability for human driving. This model is favored when tied to a service, such as Uber.

Steady Enterprise March

Enterprise IT decision makers have been exploring the potential of Internet of Things technologies, but they are not rushing IoT projects into development and are showing caution in their adoption commitments, according to survey results Red Hat released Wednesday.

Of the 215 participants in the company’s survey, “Enterprise IoT in 2017: Steady as she goes,” 55 percent indicated that IoT was important to their organization. However, only a quarter of those organizations actually were writing project code and deploying IoT technologies.

Enterprise interest in IoT has been deliberate and careful, Red Hat’s findings suggest.

Open source software is well positioned to be the dominant technology for IoT development, and open source partners will be vital to project success, the survey results indicate.

The latest survey was a follow-up to Red Hat’s 2015 survey on IoT interest in the enterprise. While it appears that interest in IoT is picking up, companies are approaching actual rollouts with the common enterprise IT theme of “steady deliberation.”

The aim of the 2015 survey was to find out if people were building IoT solutions from scratch or were leveraging pieces from other projects and adding an IoT component, said Lis Strenger, senior principal product marketing manager for Red Hat.

“Knowing that would help us decide what we had to add to our own products. Two years later … we found that the hype cycle of IoT had moved ahead very fast. It went out of hype more quickly than people expected it to,” she told LinuxInsider.

Survey Revelations

The survey was segmented and sought responses only from people fitting the developer and architect profile.

At 55 percent, the number of survey respondents who described IoT as important to their organization was up 12 percent from 2015.

Their IoT deployments were in the early stages, with fewer than a quarter of respondents actually designing, prototyping or coding an IoT project, Strenger pointed out.

Still, “more people are further along in active IoT projects. That was an important discovery,” she said.

About 22 percent of respondents were in active development — designing, prototyping or coding.

“This is a pretty significant chunk of our customer base,” Strenger noted.

Almost 60 percent of respondents were looking to IoT to drive new business opportunities, rather than to optimize existing investments or processes.


Unexpected Takeaway

One of the chief takeaways from the latest study is that devs view open source as the best approach to accommodating the need for rapid innovation, according to Strenger.

An impressive 89 percent of respondents said they were going to be using open source software.

Gigabit Wireless and the Anti-iPhone Set

One of the biggest disappointments at this year’s Mobile World Congress, which opened Monday, is that the Samsung Galaxy S8 won’t make it to the show. The phone’s official launch is scheduled for March 29.

The Galaxy line has been the ultimate iPhone fighter. Rumors around the anniversary edition of the iPhone suggest that it will do amazing, magical things, like 3D selfies. (OK, I’m really missing Steve Jobs at the moment — who the hell wants 3D selfies?!?)

Missing the biggest historical iPhone alternative is keeping a lot of us home this week. Still, LG, Motorola, Lenovo and Qualcomm are expected to make huge announcements that could leave the iPhone 8 looking a tad out of date when it finally launches later in the year.

I’ll share some observations on what they have in store and close with my product of the week: a new PC camera from Logitech that enables Windows Hello on laptops and desktop PCs that otherwise wouldn’t support it. (When it works, Windows Hello is actually pretty cool.)

Gigabit Wireless

Some of this stuff we can anticipate just from Qualcomm’s launches. Perhaps the biggest of late is the Qualcomm X20 modem. This part is likely to dominate the high-end phones announced at MWC, and for good reason. It isn’t that it provides a maximum throughput of 1.2 gigabits per second — while impressive, that would just blow out our data plans — but that it uses carrier aggregation to increase overall data speeds by 2x or better.

This means you’ll have a far better chance of syncing your mail or downloading a book, movie or big file during the last minutes before the flight attendant forces you to put your phone in airplane mode. It also means that cloud-based services likely will work much better on your phone, which will open the door for things like…


Cloud-Based Artificial Intelligence

Let’s not kid ourselves — services like Siri suck. We’ve been waiting for some time for Apple’s partnership with IBM to result in a far better, Watson-like personal assistant. However, the richer the service, the less likely it is to run on the phone itself, and the more battery life it demands.

If you really want a powerful artificial intelligence experience on the phone, you need both a powerful cloud-based AI and enough bandwidth to make the thing work, so expect some interesting, and far more powerful, cloud-based services announced this week.

Watson may be a stretch — though I doubt it — but the vastly improved Google Assistant is expected to appear on a far wider range of phones this year. So, one way or another, the new smartphones are likely to become a ton smarter.


LG Steps Into Samsung’s Space

With the Galaxy S8 delayed, LG is expected to step into Samsung’s space with a stunning new phone that is mostly hardened glass. I expect Corning, which makes Gorilla Glass, will be especially pleased.

This phone is expected to be mostly screen (tiny metal borders), to have the most advanced camera system to date along with a ton of performance-based features, and it could well be the phone to lust after. Leaked images suggest it may be one of the most beautiful phones ever created. Apple will not be pleased.


BlackBerry’s Move

BlackBerry is expected to showcase its Project Mercury at the show (the company teased it at CES this year). It’s the last BlackBerry-designed phone, and the company is going out with a bang.

I’ve seen pictures of it floated on the Web, and it appears to be the best blend of a keyboard and screen phone yet. As BlackBerry phones have been for some time, it is Android-based, but it’s hardened and surprisingly pretty.

Linux Begins

Once you have a sense of the vast potential of Linux, you may be eager to experience it for yourself. Considering the complexity of modern operating systems, though, it can be hard to know where to start.

As with many things, computers can be better understood through a breakdown of their evolution and operation. The terminal is not only where computers began, but also where their real power still resides. I’ll provide here a brief introduction to the terminal, how it works, and how you can explore further on your own.

Although “terminal,” “command line,” and “shell” are often used interchangeably, it helps to learn the general distinctions between these terms. The word “terminal” comes from the old days of Unix — the architecture on which Linux is based — when university campuses and research facilities had a room-sized computer, and users interacted with it by accessing keyboard-and-screen terminals scattered around the campus and connected to the central hub with long cables.

Today, most of us don’t deal with true terminals like those. Instead, we access emulators — interfaces on Unix-like systems that mimic the terminal’s control mechanism. The kind of terminal emulator you’re most likely to see is called a “pseudo-terminal.”

Also called a “terminal window,” a pseudo-terminal is an application that runs in your normal graphical desktop session and opens a window allowing interaction with the shell. Examples include GNOME Terminal and KDE Konsole. For the purpose of this guide, I’ll use “terminal” to refer exclusively to terminal emulators.

The “command line” is simply the type of control interface one uses on the terminal, named for the fact that you write lines of text that are interpreted as commands.

The “shell” is the program the command line uses to understand and execute your commands. The common default shell on Linux is Bash, but there are others, such as Zsh and the traditional Unix C shell.
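If you already have a terminal open, you can check which shell you're running. This is a minimal sketch; the exact output depends on your distribution and configuration:

```shell
# At an interactive prompt, $0 usually holds the name of the running shell
echo "$0"

# A specific shell can also be invoked by name if it's installed;
# here we just print Bash's version banner
bash --version | head -n 1
```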


File Organization

The last thing you need to know before diving in is how files are organized. In Unix-like systems, directories are ordered in an upside-down tree, with the root filesystem (notated as “/” and distinct from the “/root” directory) as the starting point.

The root filesystem contains a number of directories within it, which have their own respective directories and files, and so on, eventually extending to encompass every file your computer can access. The directories directly within the root filesystem, in directory notation, are given right after the “/”.

For example, the “bin” directory contained right inside the root would be addressed as “/bin”. All directories at subsequent levels down are separated with a “/”, so the “bin” directory within the “usr” directory in the root filesystem would be denoted as “/usr/bin”. Furthermore, a file called “bash” (the shell), which is in “bin” in “usr” would be listed as “/usr/bin/bash”.
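You can explore this notation directly in a terminal. The exact contents listed will vary from system to system, but the path structure is the same everywhere:

```shell
ls /            # list the directories directly inside the root filesystem
ls /usr         # the "usr" directory inside root
ls /usr/bin     # the "bin" directory nested inside "/usr"
command -v bash # print the full path to the shell's own file
```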

So how do you find these directories and files and do stuff with them? By using commands to navigate.

To figure out where you are, you can run “pwd” (“print working directory”) and you will get the full path to the directory you’re currently in.

To see where you can go, run “ls” to list directory contents. When run by itself, it returns the contents of the current directory, but if you put a space after it and then a path to a directory, it will print the contents of the directory at the end of the path.

Using “ls” can tell you more than that, though. If you insert “-l” between the command and the path with a single space on either side, you will get the “long” listing specifying the file owner, size and more.
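Put together, a short navigation session looks like this (the output will differ on your machine):

```shell
pwd          # print the full path of the current working directory
ls           # list the contents of the current directory
ls /usr      # list the contents of a directory given by path
ls -l /usr   # "long" listing: permissions, owner, size, modification date
```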


Commands, Options, Arguments

This is a good time to explain the distinction between commands, options and arguments. The command, which is the program being run, goes first.

After that, you can alter the functionality of the command by adding options, which are either one dash and one letter (“-a”) or two dashes and a word (“--all”).

The argument — the thing the command operates on — takes the form of a path. Many commands do not need arguments to provide basic information, but some lend far greater functionality with them, or outright require them.
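As a concrete illustration, here is one command broken into its three parts; "-a" and "--all" are the short and long spellings of the same option:

```shell
# command  options    argument
#   ls     -l --all   /usr
ls -l --all /usr   # long listing of /usr, including hidden ("dot") entries

# Short options can be combined: -la is the same as -l -a
ls -la /usr
```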