Amazon.com Inc. is developing a voice-activated wearable device that can recognize human emotions.
The wrist-worn gadget is described as a health and wellness product in internal documents reviewed by Bloomberg. It’s a collaboration between Lab126, the hardware development group behind Amazon’s Fire phone and Echo smart speaker, and the Alexa voice software team.
Designed to work with a smartphone app, the device has microphones paired with software that can discern the wearer’s emotional state from the sound of his or her voice, according to the documents and a person familiar with the program. Eventually the technology could be able to advise the wearer how to interact more effectively with others, the documents show.
It’s unclear how far along the project is, or if it will ever become a commercial device. Amazon gives teams wide latitude to experiment with products, some of which will never come to market. Work on the project, code-named Dylan, was ongoing recently, according to the documents and the person, who requested anonymity to discuss an internal matter. A beta testing program is underway, this person said, though it’s unclear whether the trial includes prototype hardware, the emotion-detecting software or both. Amazon declined to comment.
The notion of building machines that can understand human emotions has long been a staple of science fiction, from stories by Isaac Asimov to Star Trek’s android Data. Amid advances in machine learning and voice and image recognition, the concept has recently marched toward reality. Companies including Microsoft Corp., Alphabet Inc.’s Google and IBM Corp., among a host of other firms, are developing technologies designed to derive emotional states from images, audio data and other inputs. Amazon has discussed publicly its desire to build a more lifelike voice assistant.
The technology could help the company gain insights for potential health products or be used to better target advertising or product recommendations. The concept is likely to add fuel to the debate about the amount and type of personal data scooped up by technology giants, which already collect reams of information about their customers. Earlier this year, Bloomberg reported that Amazon has a team listening to and annotating audio clips captured by the company’s Echo line of voice-activated speakers.
A U.S. patent filed in 2017 describes a system in which voice software uses analysis of vocal patterns to determine how a user is feeling, discerning among “joy, anger, sorrow, sadness, fear, disgust, boredom, stress, or other emotional states.” The patent, made public last year, suggests Amazon could use knowledge of a user’s emotions to recommend products or otherwise tailor responses.
A diagram in the patent filing says the technology can detect an abnormal emotional condition and shows a sniffling woman telling Alexa she’s hungry. The digital assistant, picking up that she has a cold, asks the woman if she would like a recipe for chicken soup.
A second patent awarded to Amazon mentions a system that uses techniques to distinguish the wearer’s speech from background noises. Amazon documents reviewed by Bloomberg say the wearable device will take advantage of such technology.
Amazon’s work on a wearable device underscores its ambitions of becoming a leading maker of both cutting-edge speech recognition software and consumer electronics. The Echo smart speaker line and embedded Alexa voice software have popularized the use of voice commands in the home. The company has also added voice control to Fire-branded video streaming devices for television, as well as tablets.
But Amazon’s efforts to create smartphone software to rival Apple Inc. or Google have failed. So the company is trying to make Alexa ubiquitous in other ways. Bloomberg reported earlier this year that Amazon was developing wireless earbuds, similar to Apple AirPods, that are expected to include the Alexa voice software. The company has begun distributing Echo Auto, a dashboard-mounted speaker and microphone array designed to pair with a smartphone, and says it received 1 million pre-orders.
Amazon has also been working on a domestic robot, Bloomberg reported last year. Codenamed “Vesta,” after the Roman goddess of the hearth, home and family, the bot could be a kind of mobile Alexa, according to people familiar with the project. Prototypes of the robot can navigate through homes like a self-driving car.
Cloud computing has two meanings. The most common refers to running workloads remotely over the internet in a commercial provider’s data center, also known as the “public cloud” model. Popular public cloud offerings—such as Amazon Web Services (AWS), Salesforce’s CRM system, and Microsoft Azure—all exemplify this familiar notion of cloud computing. Today, most businesses take a multicloud approach, which simply means they use more than one public cloud service.
The second meaning of cloud computing describes how it works: a virtualized pool of resources, from raw compute power to application functionality, available on demand. When customers procure cloud services, the provider fulfills those requests using advanced automation rather than manual provisioning. The key advantage is agility: the ability to apply abstracted compute, storage, and network resources to workloads as needed and tap into an abundance of prebuilt services.
The public cloud lets customers gain new capabilities without investing in new hardware or software. Instead, they pay their cloud provider a subscription fee or pay for only the resources they use. Simply by filling in web forms, users can set up accounts and spin up virtual machines or provision new applications. More users or computing resources can be added on the fly—the latter in real time as workloads demand those resources thanks to a feature known as autoscaling.
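To make the autoscaling idea concrete, here is a toy Python sketch of the kind of decision rule an autoscaler applies. It is illustrative only, not any provider's actual algorithm; the function name and thresholds are invented for this example.

```python
import math

def desired_instances(current, cpu_pct, target_pct=60, min_n=1, max_n=10):
    """Pick an instance count that brings average CPU utilization near target_pct."""
    # Proportional rule: if utilization is 1.5x the target, grow the fleet ~1.5x.
    wanted = math.ceil(current * cpu_pct / target_pct)
    # Clamp to the fleet's configured bounds.
    return max(min_n, min(max_n, wanted))

print(desired_instances(4, 90))   # overloaded fleet grows: 6
print(desired_instances(4, 30))   # idle fleet shrinks: 2
print(desired_instances(8, 95))   # growth is capped at max_n: 10
```

Real autoscalers add cooldown periods and smoothing on top of a rule like this, so capacity doesn't thrash as workload demand fluctuates.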
The array of available cloud computing services is vast, but most fall into one of the following categories.
SaaS (software as a service) definition
This type of public cloud computing delivers applications over the internet through the browser. The most popular SaaS applications for business can be found in Google’s G Suite and Microsoft’s Office 365; among enterprise applications, Salesforce leads the pack. But virtually all enterprise applications, including ERP suites from Oracle and SAP, have adopted the SaaS model. Typically, SaaS applications offer extensive configuration options as well as development environments that enable customers to code their own modifications and additions.
PaaS (platform as a service) definition
PaaS provides sets of services and workflows that specifically target developers, who can use shared tools, processes, and APIs to accelerate the development, testing, and deployment of applications. Salesforce’s Heroku and Force.com are popular public cloud PaaS offerings; Pivotal’s Cloud Foundry and Red Hat’s OpenShift can be deployed on premises or accessed through the major public clouds. For enterprises, PaaS can ensure that developers have ready access to resources, follow certain processes, and use only a specific array of services, while operators maintain the underlying infrastructure.
FaaS (functions as a service) definition
FaaS, the cloud version of serverless computing, adds another layer of abstraction to PaaS, so that developers are completely insulated from everything in the stack below their code. Instead of futzing with virtual servers, containers, and application runtimes, they upload narrowly functional blocks of code, and set them to be triggered by a certain event (such as a form submission or uploaded file). All the major clouds offer FaaS on top of IaaS: AWS Lambda, Azure Functions, Google Cloud Functions, and IBM OpenWhisk. A special benefit of FaaS applications is that they consume no IaaS resources until an event occurs, reducing pay-per-use fees.
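As a sketch of what those narrowly functional blocks of code look like in practice, here is a minimal AWS Lambda-style Python handler. The `handler(event, context)` signature matches Lambda's Python programming model, but the event shape below is a made-up example; real triggers deliver provider-specific payloads.

```python
import json

def handler(event, context=None):
    # Runs only when the triggering event fires (e.g., a form submission);
    # there is no server for the developer to provision or manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulating an invocation locally:
print(handler({"name": "Alexa"}))
```

Because the function only executes per event, a deployment like this accrues no compute charges while idle, which is the pay-per-use benefit described above.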
Private cloud definition
A private cloud downsizes the technologies used to run IaaS public clouds into software that can be deployed and operated in a customer’s data center. As with a public cloud, internal customers can provision their own virtual resources to build, test, and run applications, with metering to charge back departments for resource consumption. For administrators, the private cloud amounts to the ultimate in data center automation, minimizing manual provisioning and management. VMware’s Software Defined Data Center stack is the most popular commercial private cloud software, while OpenStack is the open source leader.
Note, however, that the private cloud does not fully conform to the definition of cloud computing. Cloud computing is a service. A private cloud demands that an organization build and maintain its own underlying cloud infrastructure; only internal users of a private cloud experience it as a cloud computing service.
Hybrid cloud definition
A hybrid cloud is the integration of a private cloud with a public cloud. At its most developed, the hybrid cloud involves creating parallel environments in which applications can move easily between private and public clouds. In other instances, databases may stay in the customer data center and integrate with public cloud applications—or virtualized data center workloads may be replicated to the cloud during times of peak demand. The types of integrations between private and public cloud vary widely, but they must be extensive to earn a hybrid cloud designation.
Public APIs (application programming interfaces) definition
Just as SaaS delivers applications to users over the internet, public APIs offer developers application functionality that can be accessed programmatically. For example, in building web applications, developers often tap into the Google Maps API to provide driving directions; to integrate with social media, developers may call upon APIs maintained by Twitter, Facebook, or LinkedIn. Twilio has built a successful business dedicated to delivering telephony and messaging services via public APIs. Ultimately, any business can provision its own public APIs to enable customers to consume data or access application functionality.
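Under the hood, a typical public API call is just an HTTP request with query parameters. The sketch below only builds such a request URL; the endpoint and key are hypothetical stand-ins, since real services like the Google Maps API require registration and define their own request schemas.

```python
from urllib.parse import urlencode

def directions_url(base, origin, destination, key):
    # Public APIs typically authenticate callers with an API key
    # passed alongside the query parameters.
    params = urlencode({"origin": origin, "destination": destination, "key": key})
    return f"{base}?{params}"

url = directions_url("https://api.example.com/directions",
                     "Boston,MA", "New York,NY", "DEMO_KEY")
print(url)
```

Fetching that URL with any HTTP client would return structured data (usually JSON) that the web application then renders, which is the programmatic access pattern described above.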
iPaaS (integration platform as a service) definition
Data integration is a key issue for any sizeable company, but particularly for those that adopt SaaS at scale. iPaaS providers typically offer prebuilt connectors for sharing data among popular SaaS applications and on-premises enterprise applications, though providers may focus more or less on B-to-B and e-commerce integrations, cloud integrations, or traditional SOA-style integrations. iPaaS offerings in the cloud from such providers as Dell Boomi, Informatica, MuleSoft, and SnapLogic also let users implement data mapping, transformations, and workflows as part of the integration-building process.
IDaaS (identity as a service) definition
The most difficult security issue related to cloud computing is the management of user identity and its associated rights and permissions across private data centers and public cloud sites. IDaaS providers maintain cloud-based user profiles that authenticate users and enable access to resources or applications based on security policies, user groups, and individual privileges. The ability to integrate with various directory services (Active Directory, LDAP, etc.) is essential. Okta is the clear leader in cloud-based IDaaS; CA, Centrify, IBM, Microsoft, Oracle, and Ping provide both on-premises and cloud solutions.
Collaboration platforms definition
Collaboration solutions such as Slack, Microsoft Teams, and HipChat have become vital messaging platforms that enable groups to communicate and work together effectively. Basically, these solutions are relatively simple SaaS applications that support chat-style messaging along with file sharing and audio or video communication. Most offer APIs to facilitate integrations with other systems and enable third-party developers to create and share add-ins that augment functionality.
Vertical clouds definition
Key providers in such industries as financial services, health care, retail, life sciences, and manufacturing provide PaaS clouds to enable customers to build vertical applications that tap into industry-specific, API-accessible services. Vertical clouds can dramatically reduce the time to market for vertical applications and accelerate domain-specific B-to-B integrations. Most vertical clouds are built with the intent of nurturing partner ecosystems.
Other cloud computing considerations
The most widely accepted definition of cloud computing means that you run your workloads on someone else’s servers, but this is not the same as outsourcing. Virtual cloud resources and even SaaS applications must be configured and maintained by the customer. Consider these factors when planning a cloud initiative.
Cloud computing security considerations
Objections to the public cloud generally begin with cloud security, although the major public clouds have proven themselves much less susceptible to attack than the average enterprise data center.
Of greater concern is the integration of security policy and identity management between customers and public cloud providers. In addition, government regulation may forbid customers from allowing sensitive data off premises. Other concerns include the risk of outages and the long-term operational costs of public cloud services.
Multicloud management considerations
The bar to qualify as a multicloud adopter is low: A customer just needs to use more than one public cloud service. However, depending on the number and variety of cloud services involved, managing multiple clouds can become quite complex from both a cost optimization and a technology perspective.
In some cases, customers subscribe to multiple cloud services simply to avoid dependence on a single provider. A more sophisticated approach is to select public clouds based on the unique services they offer and, in some cases, integrate them. For example, developers might want to use Google’s TensorFlow machine learning service on Google Cloud Platform to build machine-learning-enabled applications, but prefer Jenkins hosted on the CloudBees platform for continuous integration.
To control costs and reduce management overhead, some customers opt for cloud management platforms (CMPs) and/or cloud service brokers (CSBs), which let you manage multiple clouds as if they were one cloud. The problem is that these solutions tend to limit customers to such common-denominator services as storage and compute, ignoring the panoply of services that make each cloud unique.
Edge computing considerations
You often see edge computing described as an alternative to cloud computing, but it is not. Edge computing moves computation onto local devices in a highly distributed system, typically as a layer around a cloud computing core. There is usually a cloud involved to orchestrate all the devices, take in their data, and then analyze it or otherwise act on it.
Benefits of cloud computing
The cloud’s main appeal is to reduce the time to market of applications that need to scale dynamically. Increasingly, however, developers are drawn to the cloud by the abundance of advanced new services that can be incorporated into applications, from machine learning to internet of things (IoT) connectivity.
Although businesses sometimes migrate legacy applications to the cloud to reduce data center resource requirements, the real benefits accrue to new applications that take advantage of cloud services and “cloud native” attributes. The latter include microservices architecture, Linux containers to enhance application portability, and container management solutions such as Kubernetes that orchestrate container-based services. Cloud-native approaches and solutions can be part of either public or private clouds and help enable highly efficient devops-style workflows.
Cloud computing, public or private, has become the platform of choice for large applications, particularly customer-facing ones that need to change frequently or scale dynamically. More significantly, the major public clouds now lead the way in enterprise technology development, debuting new advances before they appear anywhere else. Workload by workload, enterprises are opting for the cloud, where an endless parade of exciting new technologies invite innovative use.
Remember how gravity eventually pulls down everything that goes up? Websites are no exception. It is estimated that websites are 0.5% more likely to suffer a two-second downtime for every ten additional active users. Every website is made up of multiple web pages and resources served by a remote computer (hardware) and powered by an operating system, among other software, all of which are prone to many forms of error, making 100% efficiency and availability unattainable.
Facebook and Instagram appear to be partially down for many users around the world today, March 13, 2019, as of 7:00 PM EST, making #FacebookDown trend massively on Twitter. This is the longest outage Facebook has ever experienced, and service had not yet been restored as of this writing.
Netscout engineer Roland Dobbins, a renowned network performance expert, said the outage was due to an accidental network traffic jam, originating with a European internet company, that collided with Facebook’s servers, mainly impacting Facebook as well as other resources hosted on those servers.
Some users of WhatsApp, a Facebook-owned cross-platform messaging application, also reported delays and failures in sending texts and photos on the popular platform. Downdetector.com, a website status monitoring platform, showed Facebook and Instagram blackouts across large portions of the world.
Is your website currently down too? Here are a few reasons why your website might be acting up as well as preventive measures to avert these technical discrepancies.
Hacks and Data Breach
The value of the data a website holds is usually the main reason attackers are interested in taking it down. In less organized attacks, key user login accounts may be compromised with the intent of defacing, redirecting, or deleting website pages. In more sophisticated attacks, threat actors pursue monetary gain and use advanced methods, including malware, rootkits, ransomware, or botnets, to launch a DDoS attack.
Scheduled Server Maintenance
Website hosting providers schedule server maintenance, usually timed to begin and end within off-peak periods, which vary by website type. In most cases, hosting providers notify users of the procedure in advance. Even so, maintenance can impact customer productivity and, in the case of unresolved outages, result in lost income.
Human and Machine Errors
Every day, websites hire software developers to write code and continually improve the UI/UX. Discrepancies in human-machine interaction resulting from coding mistakes, bugs, and non-specific error handling and validation methodologies can produce a wide range of errors.
Since many servers housing these websites still rely on hardware with moving parts, such as spinning disks and cooling fans, hardware failure is inevitable. Creating a redundant environment with automated backup and failover fosters continuity and helps ensure service remains uninterrupted.
DNS Changes or Expirations
Updates to DNS (the Domain Name System) typically result in a temporary delay. If a website changes its hosting service, its IP address, physical hosting servers, network traffic hops, and name servers, among other identifiers, change as well. Proactive measures include budgeting for a longer domain lifespan and staying up to date with provider updates, suspension notices, and expiry notifications.
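One quick way to diagnose this kind of problem is to check what a hostname currently resolves to. Here is a minimal Python sketch using the standard library (resolving localhost so the example works offline):

```python
import socket

def resolve(host):
    # getaddrinfo returns one tuple per address family / socket type;
    # the IP address is the first element of the sockaddr field.
    infos = socket.getaddrinfo(host, None)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```

Comparing the addresses returned before and after a hosting change, or against what the provider documents, quickly shows whether a DNS update has finished propagating.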
Server Overloads
Server overloads occur when enormous traffic hits a website at once. Hosts allocate bandwidth based on a customer’s subscription plan, and a sudden spike in traffic can exhaust those back-end resources, so providers may throttle it to protect both their own infrastructure and the customer’s. Customers, for their part, want to keep crucial data available so users keep returning to their website.
Websites like Pingdom.com, IsItDownRightNow.com, and Downdetector.com are handy tools for checking a website’s uptime and downtime history. Bear in mind that most A-list websites require more scheduled maintenance than websites with fewer active users, and that web building platforms often have vulnerabilities that may be exploited or remain in a “zero-day” state for some time.
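A toy version of what such monitoring services do is to probe a URL and record whether it answered with a healthy status. In this sketch the fetching function is injectable so the logic can be exercised without network access; by default it uses the standard library's urlopen.

```python
from urllib.request import urlopen

def is_up(url, fetch=urlopen, timeout=5.0):
    # Treat 2xx/3xx responses as "up"; connection errors, timeouts,
    # and HTTP errors (which urlopen raises) all count as "down".
    try:
        with fetch(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False
```

A real monitor would run a check like this on a schedule from several regions and log the results over time, which is roughly how these sites build their uptime and downtime histories.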
At 19, Santiago Lopez is already counting earnings totaling over USD 1 million from reporting security vulnerabilities through HackerOne, a vulnerability coordination and bug bounty platform. He’s the first to make this kind of money on the platform.
In 2015, when he was 16 years old, Lopez started to learn about hacking. He is self-taught; his hacker school was the internet, where he watched and read tutorials on how to bypass or defeat security protections.
Two years to get to $1M in bounties
The rewards came a year later when he got a $50 payout for a cross-site request forgery (CSRF) vulnerability. His largest bounty was $9,000, for a server-side request forgery (SSRF).
He spent his first bug bounty money on a new computer, and as he accumulated more in rewards, he moved to cars.
At the moment, he has a record of 1,676 distinct vulnerabilities submitted for online assets belonging to big-name companies like Verizon, Automattic, Twitter, and HackerOne, to private companies, and even to the US government. Lopez ranks second on HackerOne.
A hacker’s work week, tools and experience
In 2018, researchers on HackerOne earned over $19 million in bounties, a big jump considering that the previous five years combined paid out just over $24 million. The platform’s goal is to reach $100 million in total payouts by the end of 2020.
The recent report from the platform shows that there are over 300,000 registered hackers that submitted more than 100,000 valid vulnerabilities.
Most of the hackers (35.7%) spend up to 10 hours per week, on average, looking for bugs. A quarter of them work between 10 and 20 hours every week.
According to the survey, researchers with plenty of experience in cybersecurity (over 21 years) represent the smallest percentage. The majority of the hackers, 72.3%, have between one and five years of experience.
Over 72% of the hackers surveyed by HackerOne for the report focus on website security, while 6.8% research APIs. The favorite tool of the trade is Burp Suite, used for testing web apps.
Making money, learning the ropes, being challenged, and having fun are the top reasons researchers submit bugs via HackerOne, while bragging rights come in last place.
HackerOne’s 2019 report also shows that cross-site scripting (XSS) is the preferred attack method, followed by SQL injection. The full report is available from HackerOne.
You may have noticed this happening more and more lately: Online accounts get taken over in droves, but the companies insist that their systems haven’t been compromised. It’s maddening, but in many cases, technically they’re right. The real culprit is a hacker technique known as “credential stuffing.”
The strategy is pretty straightforward. Attackers take a massive trove of usernames and passwords (often from a corporate megabreach) and try to “stuff” those credentials into the login page of other digital services. Because people often reuse the same username and password across multiple sites, attackers can often use one piece of credential info to unlock multiple accounts. In the last few weeks alone, Nest, Dunkin’ Donuts, OkCupid, and the video platform DailyMotion have all seen their users fall victim to credential stuffing.
“With all of the massive credential dumps that have happened over the past few years, credential stuffing has become a serious threat to online services,” says Crane Hassold, a threat intelligence manager at the digital fraud defense firm Agari. “Most people don’t change their passwords regularly, so even older credential dumps can be used with relative success. And since password reuse is rampant, cybercriminals will generally test a set of credentials against numerous different websites.”
Credential stuffing has been a problem for years now, as troves of credentials from seminal breaches like LinkedIn and Dropbox in 2012 and Myspace in 2013 have been used—to great effect!—in countless credential stuffing campaigns. But one trend in particular has fueled a recent rise in successful campaigns.
Recently hackers have posted more gigantic, aggregated credential collections that comprise multiple data breaches. One of the wildest recent examples is known as Collections #1–5, a “breach of breaches” that totaled 2.2 billion unique username and password combinations, all available to download in plaintext, for free.
“With Collections 1 through 5 we have actually seen spikes in credential stuffing recently, immediately after that news came out,” says Shuman Ghosemajumder, chief technical officer at the corporate digital fraud defense firm Shape Security. “In fact, we saw some of the largest credential stuffing attacks across several customers in just that week. And that makes sense because you’ve got all these plaintext usernames and passwords available through a torrent. It democratizes credential stuffing.”
The Collection credentials are mostly a few years old, meaning many were already in broad circulation and not worth much. But over the last week, another outlandish trove has provided exactly the type of fresh, high-quality credentials hackers cherish. Posted on the Dream Market dark web marketplace, the collection includes a total of roughly 841 million records, released in three batches, from 32 web services, including MyFitnessPal, MyHeritage, Whitepages, and the file-sharing platform Ge.tt. The first part of the dump costs about $20,000 in bitcoin, the second about $14,500, and the third roughly $9,350. A few of the breaches don’t include passwords, and some that do are protected by cryptographic scrambling that buyers will need to decode, but overall these are top-shelf troves ripe for use in credential stuffing.
As you’ve probably guessed, credential stuffing relies on automation; hackers aren’t literally typing in hundreds of millions of credential pairs across hundreds of sites by hand. Credential stuffing attacks also can’t try massive numbers of logins on a site with all the tries coming from the same IP address, because web services have basic rate-limiting protections in place to block floods of activity that could be destabilizing.
So hackers use credential stuffing tools, available on malicious platforms, to incorporate “proxy lists” to bounce the requests around the web and make them look like they’re coming from all different IP addresses. They can also manipulate properties of the login requests to make it look like they come from a diverse array of browsers, because most websites will flag large amounts of traffic all coming from the same type of browser as suspicious. Credential stuffing tools will even offer integrations with platforms built to defeat Captchas.
Credential stuffing campaigns ultimately try to get the malicious requests to blend into the noise of all the legitimate logins happening on a service at any given time, or “simulate the activity of a large population of humans,” as Shape Security’s Ghosemajumder puts it.
It also requires patience; Shape estimates that typically attackers find matches between their test credentials and an account on the platform they are attacking 0.1 to 2 percent of the time. This is why attackers need hundreds of thousands or millions of credential pairs to make credential stuffing attacks worth it. And once they’ve gotten into some accounts, attackers still need a way to monetize what they find there—either by stealing more personal data, money, gift card balances, credit card numbers, and so on—to make the whole thing worthwhile.
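The arithmetic behind that patience is easy to check. Using the 0.1 to 2 percent match rates Shape estimates:

```python
def expected_hits(credentials, match_rate):
    # Expected number of accounts compromised from a credential list.
    return int(credentials * match_rate)

# One million stolen credential pairs:
print(expected_hits(1_000_000, 0.001))  # low end (0.1%): 1000 accounts
print(expected_hits(1_000_000, 0.02))   # high end (2%): 20000 accounts
```

Even at the low end, a million-pair list yields roughly a thousand compromised accounts, which is why attackers bother despite the tiny per-attempt odds.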
The best way to protect against credential stuffing attacks is to use unique passwords for each of your digital accounts—ideally by using a password manager—and turn on two-factor authentication when it’s available. But it’s not entirely on you. Companies, too, are increasingly attempting to detect and block credential stuffing attempts. And some like Google (which also owns Nest) have started initiatives to proactively check whether users’ account credentials have been compromised in breaches and trigger password resets if they discover a match. But the trick is to do all of this without blocking or hindering legitimate activity.
One strategy companies can deploy is to track logins that ultimately result in fraud, then blacklist the associated IP address. Over time, this can erode the effectiveness of the proxy lists attackers rely on to mask their mass login attempts. This doesn’t completely stop credential stuffing, but does make it more difficult and potentially costly for hackers to carry out the attacks. Services whose users are mainly in specific geographic regions can also establish geofences, blocking proxy traffic that comes in from elsewhere in the world. Once again, though, attackers can ultimately adapt to this restriction as well by switching to using proxy IPs within those areas.
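A minimal Python sketch of that blacklisting strategy follows; the class name and threshold are invented for illustration, and production systems layer this with rate limiting, device fingerprinting, and the geofencing described above.

```python
from collections import Counter

class LoginGuard:
    """Track logins later confirmed fraudulent and block repeat-offender IPs."""

    def __init__(self, threshold=3):
        self.fraud_counts = Counter()
        self.threshold = threshold

    def record_fraud(self, ip):
        # Called when a login is retroactively identified as fraudulent.
        self.fraud_counts[ip] += 1

    def is_blocked(self, ip):
        return self.fraud_counts[ip] >= self.threshold

guard = LoginGuard(threshold=2)
guard.record_fraud("198.51.100.7")
print(guard.is_blocked("198.51.100.7"))  # False: one strike so far
guard.record_fraud("198.51.100.7")
print(guard.is_blocked("198.51.100.7"))  # True: threshold reached
```

Run continuously, a blocklist like this slowly burns through the proxy addresses attackers depend on, which is the erosion effect the strategy is after.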
A recent credential stuffing attack against the productivity and project management service Basecamp helps illustrate the problem. The company reported recently that it had faced 30,000 malicious login attempts from a diverse set of IP addresses in a single hour. The company began blocking the IPs as quickly as possible, but needed to implement a Captcha to ultimately end the attack. When the barrage died down, Basecamp found that the attackers had only succeeded in penetrating 124 accounts; the company quickly reset those account passwords to revoke the attackers’ access.
Many companies aren’t as prepared to handle the scale of the credential stuffing threat. Shape Security’s Ghosemajumder says that it’s pretty typical at this point for corporate clients to see 90 percent of their logins come from malicious attacks. He has even worked with customers who deal with credential stuffing in 99.9 percent of login attempts to their service. And while credential dumps from leaks and breaches are the primary fuel for these attacks, criminals can also diversify their approach by using credential pairs gathered from phishing attacks.
“Most credential stuffing uses information obtained from the major data breaches,” Agari’s Hassold says. “But over the past few years there has been a shift in the credential phishing landscape to target generic account credentials that are then ‘stuffed’ into a number of different websites.”
Though it is frustrating when companies insist that they haven’t been breached and deny responsibility for protecting their users from credential stuffing attacks, the truth is that service providers don’t have a foolproof way of defending against this threat. As Basecamp’s CTO and co-founder David Heinemeier Hansson put it after the service’s recent incident, “Our ops team will continue to monitor and fight any future attacks. … But if someone has your username and password, and you don’t have 2FA protection, there are limits to how effective this protection can be.”
For such a simple technique, credential stuffing is frustratingly difficult to quash. So keep your passwords as diverse as possible and use two-factor whenever you can. And complain loudly on social media about any web service that isn’t offering it.
It’s time. We’ve rounded up all our best games of 2018, then followed that up with another bunch of games you might’ve missed. We’ve done plenty of retrospectives to close out the year. Now it’s our chance to look ahead at a packed spring schedule (and beyond), rounding up all the games we’re most excited about for 2019.
That part is key: Most excited about. That means you’ll find some obvious picks here, like Metro Exodus. You’ll also find some smaller, more niche picks like Disco Elysium, Heaven’s Vault, and The Occupation. And it means this is not a comprehensive list. It’s just our favorites.
Sorry in advance if we cut your favorite game from the list.
Resident Evil 2 – January 25
The first major PC release of 2019 is Capcom’s Resident Evil 2 remake ($60 preorder on Humble), due to release at the end of January. It’s probably the safest possible bet Capcom could make after the bold first-person pivot of Resident Evil VII. The Resident Evil 2 remake brings back all the fans’ old favorites. Leon’s here! And Claire! And Ada Wong! And Raccoon City! Also, it’s been redone to use the over-the-shoulder camera from Resident Evil IV!
It’s like a mashup of everyone’s favorite Resident Evils. That’s less exciting (to me at least) than a proper Resident Evil VII follow-up, but it’ll be great to have this classic story playable on modern machines, and with mechanics befitting a 2019 video game. So long, fixed camera angles. Adios, tank controls. We can do better now.
The Occupation – February 5
The Occupation was supposed to release in October. Now it’s supposed to release in February. I don’t think anyone even announced a delay—it just slipped into the future as if the original date never existed, the perfect way to delay a game that’s about a corrupt government cracking down on civil liberties to keep citizens safe.
Delay or no, The Occupation’s still one of my most anticipated games for 2019. The game takes place over four real-time hours, with characters and events sticking to a strict schedule. You play a journalist trying to uncover the facts behind a deadly crime—but you need to make decisions about which leads to pursue and how to follow them. Do you meet with the government official you have an appointment with? Or perhaps blow them off and root through a colleague’s empty office?
I’ve played a lot of so-called “immersive sims” over the years, but none as ambitious as The Occupation. I hope the delay gave the team enough time to fine-tune the details.
Metro Exodus – February 15
Usually these lists become outdated because of delays, but not this time. The day after we recorded our 2019 preview video, Metro Exodus ($60 preorder on Humble) announced it was moving its release date up a week, from February 22 to February 15. That takes it out of competition with Anthem and puts it back up against Crackdown 3, as well as Far Cry: New Dawn.
Metro is the one I’m looking forward to most though. I loved the cramped corridor shooting of Metro 2033 and Last Light, and while I’m a bit less enamored with the idea of a pseudo-open-world Metro game I’m curious to see whether it works, guiding Artyom on some grand journey through the Russian countryside.
Far Cry: New Dawn – February 15
Metro Exodus’s strongest competition, Far Cry: New Dawn ($40 preorder on Humble) releases the same day with a brighter and goofier take on the post-apocalypse. And you know what? I’m kind of looking forward to it. I think Far Cry’s serious numbered entries are mostly mediocre (especially Far Cry 5) but the gimmicky spin-offs like Blood Dragon and Primal are interesting experiments—even when they don’t quite work out.
So a post-apocalyptic Far Cry? One that’s set on the same map as Far Cry 5, but without all the political and religious overtones? It probably won’t break new ground for the series or for games as a whole, but it at least sounds like a decently fun time. And hey, Fallout 76 set the bar pretty low, so…
Anthem – February 22
Once upon a time February 22 was supposed to be the crowded day, but first Crackdown 3 dipped to February 15 and then Metro followed suit. Now only Anthem ($60 preorder on Origin) remains, BioWare’s take on a Destiny-style shooter—except maybe with a better story? That’s a pretty thin maybe, based on what I’ve seen so far, but I’m still holding out some hope. It is BioWare, after all.
We really don’t know though. BioWare’s been reticent about showing off Anthem’s story, instead focusing on how it plays. And I can say: It plays great. At our E3 demo I claimed Anthem plays “even smoother than Destiny,” which is high praise coming from me. Rocketing around in my little mech, strafing waterfalls and diving underwater, then exploding back out of a pool to shoot some nearby foes—it’s effortless.
But I loved the shooting in Mass Effect: Andromeda and not much else, so…well, I hope the story’s decent. Fingers crossed.
The Sinking City – March 21
Frogwares’s Sherlock Holmes series is the closest I’ve come to a gaming guilty pleasure. They’re low budget and often buggy, the cases you solve are hit-or-miss, and the mechanics for finding a solution are even more inconsistent. And yet they often rise above their station, delivering excellent character moments for Holmes and Watson, or seizing on a neat detective game gimmick (like Crimes and Punishments with its red herring endings).
Point being: I’m always interested in what Frogwares is up to, even if the results aren’t perfect. And with Cyanide’s 2018 Call of Cthulhu game a mess, that makes Frogwares’s Sinking City our best hope for a truly unsettling mythos experience. The cinematic trailer below gives me no idea whether this is mostly an action game or a detective game, but I’m at least excited to find out.
Sekiro: Shadows Die Twice – March 22
Dark Souls is dead. Long live Dark Souls. If you believe From Software, the Dark Souls series is finished forever. That doesn’t mean From Software is done making that style of game though.
Enter Sekiro: Shadows Die Twice ($60 preorder on Steam). It’s not a Souls game, but Sekiro takes those ideas—deliberate combat, pattern recognition, grand boss battles, impenetrable lore—and transposes them to Japan’s Sengoku period. It is, in so many ways, recognizable as a From Software game.
And yet it’s not afraid to deviate from Dark Souls as well. Exploration is more active, as your character has a grappling hook-arm that allows him to leap to rooftops and branches or swing across gaps. That, in turn, makes stealth a viable option—either bypassing enemies entirely or leaping down on them unawares for a quick kill.
Mortal Kombat XI – April 23
We don’t know much about Mortal Kombat XI yet. Announced in December at The Game Awards, all we’ve seen is a single CGI trailer of Dark Raiden fighting two Scorpions. That means uh…well, Dark Raiden and Scorpion are in the game. It also seems like the character customization elements of Injustice 2 will make it over to this latest Mortal Kombat.
But what will the campaign look like? That’s what I’m most curious to see. The seamless cinematic-driven campaigns of Mortal Kombat IX and X were great, but after four games (including the Injustices) it seems like it might be time for a shakeup. Rumors claim Mortal Kombat XI will include a full-on adventure mode with a map to explore, a la 2005’s Shaolin Monks, but we’ll see.
As for Rage 2, the question is whether the story can pull its weight as well. Lest we forget, the first Rage played pretty well. It was just boring as hell. Rage 2 seems to be shifting toward a quirkier, Borderlands-lite style of humor, which might help propel the action along…or might get old quick. It’s hard to tell.
Either way, I’m looking forward to Rage 2—and that’s a sentence I never thought I’d write a year ago.
Following tech news can feel like living on the DMZ between a utopia and the apocalypse. We’re always one public scandal, earnings call, or product announcement away from tipping the scales in either direction. This year, both sides showed up in full force.
Gatwick Airport, the UK’s second largest airport, has just become a key example of how thoroughly today’s consumer tech can disrupt our infrastructure. The airport briefly suspended all flights again Friday, the third time in three days, due to suspected drone sightings in the area. That’s right — drones were able to shut down a major UK artery for well over 24 hours, as police and armed forces have seemingly been unable to find those responsible.
The internet is indeed an e-world of its own. A 2012 survey by Netcraft, a UK-based provider of cybercrime disruption services across a wide range of industries, found that some 144,000 websites launched daily, which amounts to over 51 million annually.
As of January 2018, six years later, the figure stood at 1,805,260,010 (over 1.8 billion) websites. Some of these websites grow big enough to rank among the world wide web’s top 500. Sadly, the rest get almost no visitors and rank lower, not because they’re that bad, but because the top can only fit so many at a time.
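As a quick sanity check, the daily launch figure is consistent with the annual total the survey reports:

```python
sites_per_day = 144_000
sites_per_year = sites_per_day * 365  # no leap-day adjustment
print(f"{sites_per_year:,}")  # 52,560,000, i.e. "over 51 million annually"
```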
Below is a carefully researched, compiled and comprehensive list of 10 useful websites you wish you knew earlier.
1. The Internet Map
If not the coolest website on the internet right now, The Internet Map, designed by Ruslan Enikeev as a personal, non-commercial project, is exactly what the name implies: a map of the internet.
The designer claims that the website continuously archives all other sites on the internet, representing each as a dot. The size of each dot depicts the website’s ranking according to Alexa (Amazon’s website-ranking service), making Google, Facebook, and a few others stand out as distinct turquoise spheres among the rest.
2. Radio Garden
Ever been curious about how radio stations from other countries sound? Radio Garden’s user interface is quite intuitive, featuring a dynamic world map of live radio across the globe. It has navigation similar to Google Earth, plus unique features including favorite stations, history lookup, jingle mode, RDS, and mute mode that are guaranteed to make you want to bookmark this website immediately.
Unlike most social media websites, Radio Garden is one of the very few sites where users get otherwise payable content for free. It works on the same concept as radiooooo.com, except that radiooooo lets you choose your desired year and genre of radio.
3. Internet’s first website
http://info.cern.ch/hypertext/WWW/TheProject.html, created by Tim Berners-Lee, is the home of the first website. There are over 1.8 billion websites in 2018; 27 years ago there were none. This first web page, published on August 6, 1991, was a landmark that informed the world of the World Wide Web project, and it ran on a NeXT computer at CERN, the European Organization for Nuclear Research. It comprises steps on how to create web pages and explains the meaning of hypertext.
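The whole idea of hypertext boils down to documents pointing at other documents. As a small sketch, the snippet below feeds a tiny page written in the spirit of that 1991 original (the text here is paraphrased, not the verbatim page) through Python’s standard-library HTML parser and pulls out its links:

```python
from html.parser import HTMLParser

# A tiny page in the spirit of the 1991 original: plain hypertext, no CSS.
# The wording is paraphrased for illustration, not quoted from the real page.
PAGE = """
<h1>World Wide Web</h1>
<p>The WorldWideWeb (W3) is a wide-area
<a href="WhatIs.html">hypermedia</a> information retrieval initiative.
Everything online about W3 is linked directly or indirectly to this
document, including a <a href="Summary.html">summary</a> of the project.</p>
"""

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag -- the essence of hypertext."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed(PAGE)
print(parser.links)  # ['WhatIs.html', 'Summary.html']
```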
The page predates CSS and simplified website builders including Dreamweaver, Elementor, Divi, and Envato, so prepare your mind for something ‘amazing’ before attempting to open it.
4. Web Oasis
Most times, it gets boring staring at that static google.com home page, right? How about making https://weboas.is/ your homepage instead?
Aside from the cool hacking theme, Web Oasis has prebuilt bookmarks for popular websites across the internet, with clear navigation links that unveil on mouse hover, plus a fully customizable user interface and add-ons for everyday use, including news, tech, radio, crypto, a quick notepad editor, weather, finance, a secure password generator, and even an arcade game.
It also has an embedded chat room, a 2-character shortcut search engine mode, and a section on the screen’s top right corner showing your local system information. Now, this is the real Google, literally housing all of your wants on a single website.
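Web Oasis’s password generator is a black box, but the general technique is simple. As a rough illustration (not their implementation), here is a minimal generator using Python’s cryptographically secure `secrets` module; the function name and default length are invented for this sketch:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawing each character with the cryptographically secure `secrets`
    module rather than the predictable `random` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Using `secrets` instead of `random` matters here: `random` is seeded predictably and is explicitly documented as unsuitable for security purposes.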
5. Cymath

If Cymath had been available decades before its 2013 launch, the internet would have been a better place, especially for students looking for a step-by-step approach to solving their mathematics problems. Cymath is every student’s dream: you can have all your assignments done, be it graphs or equations.
Its inventors believe in the ideology of open education, and that every student deserves math help that is reliable and accessible. Powered by a combination of artificial intelligence and heuristics, it solves math problems step by step, like a teacher would.
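Cymath’s engine is proprietary, but the “step-by-step like a teacher” idea is easy to picture. Purely as a toy illustration (every name here is invented, and this is nothing like their real solver), here is a sketch that solves a linear equation while narrating each step:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c, recording each algebraic step as a teacher would."""
    steps = [f"{a}x + {b} = {c}"]
    steps.append(f"{a}x = {c - b}")   # subtract b from both sides
    x = Fraction(c - b, a)            # divide both sides by a (exact arithmetic)
    steps.append(f"x = {x}")
    return steps, x

steps, x = solve_linear(3, 4, 19)
print("\n".join(steps))
# 3x + 4 = 19
# 3x = 15
# x = 5
```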
6. Kon-Boot

The fact that this tool is available on the surface web is amusing. Kon-Boot prides itself as the world’s best remedy for forgotten passwords for a simple reason: it bypasses the authentication process of your (or probably not your) operating system without overwriting your old password or leaving a digital footprint.
Technically, this tool lets you log in to any Windows or Mac operating system with full rights without prior knowledge of the machine’s password. Kon-Boot is designed primarily for tech repairs, forensic teams, and security audits. Piotr Bania is the mastermind behind this rare tool.
7. User testing
Finally, a freebie on the internet that isn’t a hoax? Except that this isn’t free money: you earn it. User testing, or usability testing, pays between $10 and $30 for every website you test. The goal of user testing is to get a digital product in front of a customer as early as possible.
Users are asked to perform a specific task that simulates real-world usage, usually of a website. These tasks can be as easy as opening multiple pages across a selected website while recording voice and screen, running A/B tests and preference tests, and eventually taking a UI/UX review questionnaire afterward. The tests take less than 10 minutes to complete, no experience is required, and there is no cap on the number of tests a user can take per day.
8. Awwwards

Unlike Amazon’s Alexa, which ranks websites algorithmically based on web statistics, visits, relevance, and SEO strategy, Awwwards accepts website submissions and allows users to rate those sites on four distinct criteria: design, usability, creativity, and content.
Awwwards is home to a vast collection of mind-blowing websites from across the internet, where users not only get a chance to rate them on design, creativity, and innovation but can also gather unexplored ideas for their next projects. Users can also search directories by niche, as well as hire designers or apply for website design positions site-wide.
9. Rhyme Zone
Are you a poet, a song lyricist, an essay writer, a rapper, or just looking for a rhyme? Then you should try RhymeZone. RhymeZone is arguably the best and fastest way to find English words for any piece of writing, and it has been running continuously since 1996. It is a concise guide for finding rhymes, antonyms, synonyms, descriptive words, definitions, lyrics, poems, homophones, similar-sounding words, related words, similar spellings, picture search, Shakespeare search, and letter matching.
10. Library Genesis
Library Genesis is a search engine for the biggest archive of free e-books on the internet allowing free access to content that is otherwise paywalled or not digitized anywhere else on the internet.
Irrespective of what you read, whether novels, tech and educational material, LibGen (Sci-Tech) scientific articles, fiction, comics, standards, or magazines, you can rest assured such books reside here. LibGen initially used the domain name libgen.org but was forced to suspend use of the domain in late October 2015 due to copyright complaints from authors. The LibGen website is blocked by a handful of ISPs in the UK for obvious reasons. As of 5 June 2018, Library Genesis claims its database contains over 2.7 million books and 58 million science articles.
Bottom line: now that you’ve probably bookmarked these rare but real websites, spread the love by telling someone about them today.