Five 2018 Predictions — on GDPR, Robot Cars, AI, 5G and Blockchain

Predictions are like buses, none for ages and then several come along at once. Also like buses, they are slower than you would like and only take you part of the way. Also like buses, they are brightly coloured and full of chatter that you would rather not have on your morning commute. They are sometimes cold, and may have the remains of somebody else’s take-out happy meal in the corner of the seat. Also like buses, they are an analogy that should not be taken too far, lest they lose the point. Like buses.

With this in mind, here are my technology predictions for 2018. I’ve been very lucky to work across a number of verticals over the past couple of years, including public and private transport, retail, finance, government and healthcare — while I can’t name-check every project, I’m nonetheless grateful for the experience and knowledge this has brought, all of which feeds into the below. I’d also like to thank my podcast co-host Simon Townsend for allowing me to test many of these ideas.

Finally, one prediction I can’t make is whether this list will cause any feedback or debate — nonetheless, I would welcome any comments you might have, and I will endeavour to address them.

1. GDPR will be a costly, inadequate mess

Don’t get me wrong, GDPR is a really good idea. As a lawyer said to me a couple of weeks ago, it is a combination of the UK Data Protection Act, plus the best practices that have evolved around it, now put into law at a European level with a large fine attached. The regulations are also likely to become the basis for legislation in other countries — if you are going to trade with Europe, you might as well set it as the baseline, goes the thinking. All well and good so far.

Meanwhile, it’s an incredible, expensive (and necessary, if you’re a consumer who cares about your data rights) mountain to climb for any organisation that processes or stores your data. The deadline for compliance is May 25th, which is about as likely to be hit as I am to finally get the six-pack I wanted when I was 25.

No doubt GDPR will one day be achieved, but the fact is that it is already out of date. Notions of data aggregation and potentially toxic combinations (for example, combining credit and social records to show whether or not someone is eligible for insurance) are not just likely, but unavoidable: ‘compliant’ organisations will still be in no better place to protect the interests of their customers than currently.

The challenges, risks and sheer inadequacy of GDPR can be summed up by a single tweet sent by an otherwise unknown traveller — “If anyone has a boyfriend called Ben on the Bournemouth – Manchester train right now, he’s just told his friends he’s cheating on you. Dump his ass x.” Whoever the sender “@emilyshepss” or, indeed, “Ben” might be, the consequences for the privacy of either cannot be handled by any data legislation currently in force.

2. Artificial Intelligence will create silos of smartness

Artificial Intelligence (AI) is a logical consequence of how we apply algorithms to data. It’s as inevitable as maths, as the ability our own brains have to evaluate and draw conclusions. It’s also subject to a great deal of hype and speculation, much of which tends to follow that old, flawed futurist assumption: that a current trend maps a linear course leading to an inevitable conclusion. But the future is not linear. Technological matters are subject to the laws of unintended consequences and of unexpected complexity: every time we create something new, it creates new situations which are beyond its ability to deal with.

So, yes, what we call AI will change (and already is changing) the world. Moore’s law and its associates are making previously impossible computations possible; indeed, they will become the expectation. Machine learning systems are fundamental to the idea of self-driving cars, for example; meanwhile voice and image recognition are having their day. However, these are still a long way from any notion of intelligence, artificial or otherwise.

So, yes, absolutely look at how algorithms can deliver real-time analysis, self-learning rules and so on. But look beyond the AI label, at what a product or service can actually do. You can read Gigaom’s research report on where AI can make a difference to the enterprise, here.

In most cases, there will be a question of scope: a system that can save you money on heating by ‘learning’ the nature of your home or data centre, for example, has got to be a good thing. Over time we shall see these create new types of complexity, as we look to integrate individual silos of smartness (and their massive data sets). My prediction is that such integration work will keep us busy for the next year or so, even as learning systems continue to evolve.

3. 5G will become just another expectation

Strip away the techno-babble around 5G and we have a very fast wireless networking protocol designed to handle many more devices than at present — it does this, in principle, by operating at higher frequencies across shorter distances than current mobile masts (so we’ll need more of them, albeit in smaller boxes). Nobody quite knows how the global roll-out of 5G will take place, and questions like who should pay for it will persist, even though things are clearer than they were.

But when all’s said and done, it will set the baseline for whatever people use it for, i.e. everything they possibly can. Think 4K video calls, in fact 4K everything, and it’s already not hard to see how anything less than 5G will come as a disappointment. Meanwhile every device under the sun will be looking to connect to every other, exchanging as much data as it possibly can. The technology world is a strange one, with massive expectations being imposed on each layer of the stack without any real sense of needing to take responsibility.

We’ve seen it before. The inefficient software practices of 1990s Microsoft drove the need for processor upgrades and led Intel to a healthy profit, illustrating the industry’s vested interest in making networking and hardware platforms faster and better. We all gain as a result, if ‘gain’ can be measured in terms of being able to see your gran in high definition on a wall screen from the other side of the world. But after the hype, 5G will become just another standard release, a waymarker on the road to techno-utopia.

On the upside, it may lead to a simpler networking infrastructure. More of a hope than a prediction would be the general adoption of some kind of mesh integration between Wi-Fi and 5G, taking away the handoff pain for both people and devices that move around. There will always be a place for multiple standards (such as the energy-efficient Zigbee for IoT), but 5G’s physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.

4. Attitudes to autonomous vehicles will normalize

The good news is, car manufacturers saw this coming. They are already planning for that inevitable moment when public perception goes from, “Who’d want robot cars?” to “Why would I want to own a car?” It’s a familiar phenomenon, an almost 1984-level of doublethink in which people go from one mindset to another seemingly overnight, without noticing and, in some cases, seemingly disparaging the characters they once were. We saw it with personal computers, with mobile phones, with flat-screen TVs — in the latter case, the world went from “nah, that’s never going to happen” to recycling sites being inundated with perfectly usable screens (and a wave of people getting huge cast-off tellies).

And so, over the next year or so, we will see self-driving vehicles hit our roads. What drives this phenomenon is simple: we know, deep down, that robot cars are safer — not because they are inevitably, inherently safe, but because human drivers are inevitably, inherently dangerous. And autonomous vehicles will get safer still. And they can pick us up at 3 in the morning and take us home.

The consequences will be fascinating to watch. First, attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right? We can also expect to see a far more varied range of wheeled transport (and otherwise — what’s wrong with the notion of flying unicorn deliveries?); indeed, with hybrid forms, the very notion of roads is called into question.

There will be data, privacy, security and safety ramifications to deal with — consider the current ethical debate between leaving young people without taxis late at night, versus the possible consequences of sharing a robot Uber with a potential molester. And I recall a very interesting conversation with my son about who would get third or fourth dibs at the autonomous vehicle ferrying drunken revellers (who are not always the cleanest of souls) to their beds.

Above all, business models will move from physical to virtual, from products to services. The industry knows this, variously calling vehicles ‘tin boxes on wheels’ while investing in car sharing, delivery and other service-based models. Of course (as Apple and others have shown), good engineering continues to command a premium even in the service-based economy: competition will come from Tesla as much as Uber, or whatever replaces its self-sabotaging approach to world domination.

Such changes will take time but in the short term, we can fully expect a mindset shift from the general populace.

5. When Bitcoins collapse, blockchains will pervade

The concept that “money doesn’t actually exist” can be difficult to get across, particularly as it makes such a difference to the lives of, well, everybody. Money can buy health, comfort and a good meal; it can also deliver representations of wealth, from high-street bling to Mediterranean gin palaces. Of course money exists, I’m holding some in my hand, says anyone who wants to argue against the point.

Yet, still, it doesn’t. It is a mathematical construct originally conceived to simplify the exchange of value, to offer persistence to an otherwise transitory notion. From a situation where you’d have to prove you gave the chap some fish before he’d give you the wood he offered, you can just take the cash and buy wood wherever you choose. It’s no accident of speech that pound notes still say, “I promise to pay the bearer on demand…”

While original currencies may have been teeth or shells (happy days if you happened to live near a beach), they moved to metals in order to bring some stability in a rather dodgy market. Forgery remains an enormous problem in part because we maintain a belief that money exists, even though it doesn’t. That dodgy-looking coin still spends, once it is part of the system.

And so to the inexorable rise of Bitcoin, which has emerged from nowhere to become a global currency — in much the same way as the dodgy coin, it is accepted simply because people agree to use it in a transaction. Bitcoin has a chequered reputation, probably unfairly given that our traditional dollars and cents are just as likely to be used for gun-running or drug dealing as any virtual dosh. It’s also a bubble that looks highly likely to burst, and soon — no doubt some pundits will take that as a proof point of the demise of cryptocurrency.

Their certainty may be premature. Not only will Bitcoin itself persist (albeit at a lower valuation), but the genie is already out of the bottle as banks and others experiment with the economic models made possible by “distributed ledger” architectures such as the blockchain supporting Bitcoin. Such models are a work in progress: the idea that a single such ledger could manage all the transactions in the world (financial and otherwise) is clearly flawed.

But blockchains in general hold the key, as they address the single most important reason why currency existed in the first place — to prove a promise. This principle holds in areas way beyond money, or indeed value exchange — food and pharmaceuticals, art and music can all benefit from knowing what was agreed or planned, and how it took place. Architectures will evolve (for example with sidechains), but the blockchain principle can apply wherever the risk of fraud exists, which is just about everywhere.
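To make the “prove a promise” idea concrete, here is a minimal, illustrative hash-chained ledger in Python. The record fields and helper names are invented for the example; a real blockchain adds distributed consensus, digital signatures and much more — the sketch only shows why retroactively editing a promise is detectable:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records so each block commits to everything before it."""
    chain, prev = [], "0" * 64  # genesis value
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_chain([{"promise": "10 fish for wood"}, {"promise": "paid in full"}])
print(verify(ledger))                                 # the intact chain checks out
ledger[0]["record"]["promise"] = "5 fish for wood"    # tamper with the old promise
print(verify(ledger))                                 # the tampering is now evident
```

The point of the design is that each block's hash depends on all previous blocks, so the ledger as a whole, not any single entry, is what carries the proof.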

6. The world will keep on turning

There we have it. I could have added other things — for example, there’s a high chance that we will see another major security breach and/or leak; augmented reality will have a stab at the mainstream; and so on. I’d also love to see a return to data and facts on the world’s political stage, rather than the current tub-thumping and playing fast and loose with the truth. I’m keen to see breakthroughs in healthcare from IoT, and I expect some major use of technology that hadn’t been considered to arrive, enter the mainstream and become the norm — if I knew what it was, I’d be a very rich man. Even if money doesn’t exist.

Truth is, and despite the daily dose of disappointment that comes with reading the news, these are exciting times to be alive. 2018 promises to be a year as full of innovation as previous years, with all the blessings and curses that it brings. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.”

On that, and with all it brings, it only remains to wish the best of the season, and of 2018 to you and yours. All the best!


Photo credit: Birmingham Mail

Up to Core i7-7700, GeForce GTX 1070 Ti, RGB LEDs

ZOTAC this past week formally introduced its first family of upgradeable small form-factor desktops for gamers. The ZOTAC MEK1 systems will come in two configurations, each featuring Intel’s Kaby Lake CPUs as well as NVIDIA’s Pascal GPUs. The MEK1 systems use off-the-shelf components and therefore can be easily upgraded by end users when they need to.

The ZOTAC MEK1 Gaming PCs will come in Black and White chassis themed after “future robotics and mechanical anatomy”. Both systems are based on the same Mini-ITX motherboard featuring Intel’s B250 PCH and are equipped with 16 GB of dual-channel DDR4-2400 memory, a 240 GB PCIe 3.0 x4 SSD, and a 1 TB 2.5” HDD. Meanwhile, the MEK1 Black model pairs Intel’s Core i7-7700 processor with ZOTAC’s GeForce GTX 1070 Ti graphics card, whereas the MEK1 White is powered by the Core i5-7400 and the GeForce GTX 1060 6 GB. ZOTAC’s MEK1 systems rely on air cooling and take advantage of carefully managed airflow inside the case: the graphics card is installed above the compartment with the CPU and M.2 SSD, so its heat does not affect those components.



When it comes to connectivity, the MEK1 Black and the MEK1 White are identical: they have an 802.11ac Wi-Fi + Bluetooth 4.2 module, two GbE controllers, six USB-A 3.0 ports, two USB-A 2.0 connectors, one PS/2 input, one HDMI 2.0b output, a DL DVI-D header, three DisplayPort 1.4 outputs, and analog and S/PDIF audio connectors. For some reason, ZOTAC decided not to equip its MEK1 desktops with the USB 3.1 Gen2 Type-C connectors that are present on a number of its other products. Some might say that there are not a lot of USB-C peripherals just now, but when you design a PC, you have to think about the user experience throughout its lifetime of at least three years — and USB-C will be widespread three years down the road. If customers do not enjoy the machine at the end of its lifespan, they may not come back to ZOTAC for a new one.

Following the latest trends, both MEK1 PCs feature ZOTAC’s Spectra RGB LED lighting that can be customized using a special utility. To complement the design, MEK1 desktops will come with a mechanical keyboard and an optical mouse that match their colors and feature built-in lighting.

ZOTAC MEK1 Desktops

                  MEK1 White                      MEK1 Black
CPU               Intel Core i5-7400              Intel Core i7-7700
                  3 GHz/3.3 GHz, 65 W             3.6 GHz/4.2 GHz, 65 W
PCH               Intel B250                      Intel B250
Graphics          NVIDIA GeForce GTX 1060         NVIDIA GeForce GTX 1070 Ti
                  1280 stream processors          2432 stream processors
                  80 texture units                152 texture units
                  48 ROPs                         64 ROPs
                  192-bit memory interface        256-bit memory interface
                  6 GB of GDDR5 9 GT/s memory     8 GB of GDDR5 8 GT/s memory
Memory            16 GB of DDR4-2400
Storage           240 GB PCIe 3.0 x4 SSD + 1 TB 2.5″ SATA HDD
Wi-Fi             802.11ac + BT 4.2
Ethernet          Two Gigabit Ethernet ports (RJ45)
Display Outputs   1 × DL DVI-D, 3 × DisplayPort 1.4, 1 × HDMI 2.0b
Audio             7.1-channel audio with mini-jack and S/PDIF connectors
USB               2 × USB 3.0 Type-A (front), 4 × USB 3.0 Type-A (back), 2 × USB 2.0 Type-A (back)
Other I/O         PS/2
RGB Lighting      ZOTAC Spectra
Dimensions        Height 393.7 mm (15.5″), Depth 414.02 mm (16.3″), Width 118.11 mm (4.65″)
PSU               450 W SFX 80+ Bronze
OS                Windows 10 Home 64-bit

Originally a maker of video cards, today ZOTAC is well known for its compact gaming and office systems. The company’s lineup of PCs is very broad and includes models featuring Intel’s Core i7 CPUs and NVIDIA’s high-end mobile graphics solutions that deliver performance comparable to that of desktop GPUs, but at a lower TDP. Unfortunately, such systems are hard to upgrade because NVIDIA does not allow partners to sell MXM modules to end-users. Therefore, to address people who might want an upgrade path for their PCs, ZOTAC developed its MEK1 systems trying to bring together performance, upgradeability and compact sizes while keeping the price in check. Obviously, some compromises had to be made.

To keep the system sleek, ZOTAC had to use processors with up to 65 W TDP and avoid Intel’s unlocked models, which generate 95 W of heat or more. Since Intel is gradually increasing the performance of its CPUs, the TDP constraint is not a problem per se. However, because the motherboard is based on Intel’s B250 PCH, the MEK1 systems cannot support Intel’s six-core Coffee Lake processors, eliminating any CPU upgrade options for the Black model and limiting them for the White SKU. One reason ZOTAC chose the Kaby Lake/B250 combination was timing — the desktops have been in development for quite a while. Another is the limited availability of Intel’s latest CPUs: ZOTAC’s parent company PC Partner could develop an Intel Z370-based motherboard relatively quickly, but if it could not get enough CPUs, ZOTAC could not sell the product, so the safe bet was to go with Kaby Lake.

The GPU upgrade path is, of course, considerably simpler — graphics cards released in the coming years will remain compatible with the PCIe 3.0 x16 interface. The MEK1 can only accommodate compact video cards, but these are not rare these days, and the system’s 450 W PSU should be enough even for products like the GeForce GTX 1080 Ti (assuming it fits). As for the DRAM, M.2 SSD and HDD, upgrading is as simple as installing new components into the appropriate slot or bay.

ZOTAC plans to start selling its MEK1 systems in the coming weeks. The company did not disclose pricing, but we have reached out to ZOTAC and will update the story when we get the information.


Quad Processor Server: Best for Multitasking Work

Modern server processors are designed to deliver the greatest efficiency in the latest computers while also minimizing power consumption. To begin with, you must make sure the new processor you want is compatible with your present motherboard. A number of affordable quad-processor servers can be purchased from online stores, and there are plenty of processors intended for the mid-range user who wants a workstation that can take care of a heavy multitasking workload.

Rackmount servers are precisely what you should be studying, and many of these machines come with numerous cores. Bear in mind that these contemporary low-cost servers may be subject to very high software license expenses. The quad-processor servers we researched are, in comparison to others, unquestionably the best price. It is also important to know that CPU caches are vital to making any system run far better.

AMD is far behind in terms of technology at this time, and it is strongly recommended that you go for an Intel processor instead, since it will provide better performance; Intel delivers several processors that are commonly held to be among the best CPUs around for PC gaming systems. The motherboard connects the rest of the components to one another, acting like the machine’s brain or heart; technically, it is a complex piece of technology that exists in virtually every electronic device, and it is easily the most important part of the computer. Even though most motherboards support just one CPU socket, some applications gain from having more than one processor to attack the tasks at hand. Chipsets are usually already embedded in the motherboard, meaning you must select your chipset at the same time you select your processor.

If a processor isn’t compatible, you can either look for a different processor that is, or replace your motherboard with one that works with it. Dual-processor servers were initially aimed at gaming and at the way a second processor could improve the gaming capacity of the computer. Hexa-core processors, meanwhile, are inclined to be fiendishly costly.

If you are going to re-purpose your old computer parts, there are a couple of things you will want to avoid; in particular, strip out many of the extra expansion cards you have installed over time. You will also need to decide what sort of computer you want — larger computers are usually available with quad cores. The majority of people buying cheap laptops simply want the best value for money: for the normal user, a laptop that isn’t too sluggish and can get on the internet is easily the most important thing.

Gaming companies outsmart DDoS attack with new software security solutions

New releases in the online gaming industry are highly anticipated events. Millions of gamers anxiously waiting to leap onto a shiny new game service is an irresistible target for hackers—with bragging rights being the prize. But for the gaming companies, suffering a DDoS attack is a disaster with immediate loss of revenue, mitigation costs and long-term consequences for their brand. Fortunately, new approaches to security based on multi-dimensional analytics and traffic modeling using big data are changing how this game is played.

The DDoS danger

Global gaming companies build excitement with big, heavily-marketed release dates. This brings millions of players online at the exact same time. During these traffic surges, gaming companies also see a surge of distributed denial of service (DDoS) attacks. Being able to surgically shut down the attacks without disrupting service is critical.

Successful DDoS attacks have immediate revenue implications, but more importantly, they hurt the customer base — and even a small number of grumpy gamers can do a lot of damage to the brand online. Growing the player base is essential for a healthy game launch, especially in the highly competitive gaming industry, so losing customers to an inaccessible service or bad PR can have serious consequences for any game — just look at Diablo 3, which took years to recover from its self-inflicted “Error 37” fiasco.

Gaming companies generally operate worldwide, serving millions of users. To avoid latency, they distribute their platforms across multiple region-based servers. DDoS attacks can hit all or some of these servers concurrently, or can focus on different layers of the service to weaken it to the point of being unusable.

A multi-vector attack might, for instance, use hijacked Internet of Things (IoT) devices reprogrammed to participate in the attack as well as hundreds of cloud servers with 10 Gbps uplinks to launch a simultaneous TCP/IP attack, as occurred in last year’s infamous DYN attack.

The outdated defense

Hardware mitigation solutions were not designed for the cloud and IoT era and are, unfortunately, too simplistic to keep up with these types of sophisticated threats.

When gaming companies suffer these DDoS attacks, the common defense today is to backhaul all traffic suspected of being malicious to a scrubbing center, where racks of purpose-built mitigation machines clean it in a single pass. Attack detection starts with a baseline measure of what constitutes “normal” traffic and then looks for anomalies, such as sudden large spikes. The affected traffic is then re-directed and backhauled to the scrubbers.
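That baseline-and-anomaly step can be sketched in a few lines. This is an illustrative toy, assuming simple per-interval traffic counts rather than real flow telemetry, and a naive threshold of a few standard deviations above a rolling baseline:

```python
from statistics import mean, stdev

def detect_spikes(samples, window=20, k=3.0):
    """Flag indices whose value exceeds the rolling baseline
    (mean of the previous `window` samples) by k standard deviations."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if samples[i] > mu + k * max(sigma, 1e-9):
            flagged.append(i)
    return flagged

# Steady traffic (requests per interval) with one sudden surge.
traffic = [100 + (i % 5) for i in range(40)]
traffic[35] = 900
print(detect_spikes(traffic))  # only the surge at index 35 is flagged
```

The article's criticism applies directly to this kind of detector: anything unusual but benign (a launch-day rush, say) lands above the threshold too, which is where the false positives and unnecessary backhauling come from.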

There is nothing elegant about this approach; it is slow and it suffers from a lot of false positives, meaning the unnecessary backhauling of large amounts of uninfected traffic. The detection hardware lacks the raw compute power required to perform the additional analytics needed to separate out the false positives. And, as the scale of DDoS attacks escalates, these inefficiencies become increasingly costly to gaming companies, since the system has to spend resources fighting phantom attacks, instead of identifying and dealing with other attack vectors.

A more efficient solution

A more elegant and faster approach uses software-based multi-dimensional analytics to make detection more precise. These systems combine real-time network telemetry with advanced network analytics and other data sources, such as DNS and BGP, to trace attack traffic back to its source in real time.

Multi-dimensional analytics provide visibility into cloud applications and services and can instantly identify where the traffic is originating, determining whether it is friend or foe. Additionally, big data approaches to traffic modeling can help compare a potential event to past attack profiles and be more precise about what degree of variability from ‘normal’ is OK.

Armed with this kind of analysis, it becomes possible to create simple, effective filters at the peering edge of the network for the zombie PCs, IoT devices and/or cloud servers that are carrying out the attack. The offending traffic doesn’t have to be sent to the scrubbers; it is simply blocked at the edge. And every vector of the attack can be identified, pinpointing the attack endpoints and allowing for surgically precise mitigation. The ability to identify the endpoints of the attack in real-time means that rapidly changing attack vectors can also be identified and counteracted as the attackers attempt to play cat and mouse with network security operations.
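As a sketch of that edge-blocking idea: aggregate per-source counts from flow telemetry, then emit drop rules for the worst offenders instead of backhauling their traffic. The flow tuples, the threshold and the ACL-style rule text here are all hypothetical, not any vendor's actual syntax:

```python
from collections import Counter

def build_edge_filters(flow_records, threshold=1000):
    """Aggregate per-source packet counts from (src_ip, packets) flow
    records and emit drop rules for sources exceeding the threshold."""
    counts = Counter()
    for src_ip, packets in flow_records:
        counts[src_ip] += packets
    # Rule text is an illustrative ACL-style placeholder.
    return [f"deny ip host {ip}" for ip, n in counts.items() if n >= threshold]

flows = [("203.0.113.7", 800), ("203.0.113.7", 600),
         ("198.51.100.2", 40), ("192.0.2.99", 1500)]
print(build_edge_filters(flows))
# ['deny ip host 203.0.113.7', 'deny ip host 192.0.2.99']
```

In practice the interesting part is upstream of this step — deciding, from DNS, BGP and behavioural data, which sources are actually hostile — but once that is known, dropping them at the peering edge is exactly this cheap.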

This is a high stakes game that is escalating with the spread of inexpensive, insecure cloud services (<10 GB) and IoT devices. DDoS botnets have evolved beyond infecting PCs and now use IoT devices and Linux servers in the cloud. This new arsenal of weapons is giving hackers a completely different level of power than they’ve had before.

Fortunately, software security solutions built around deep network analytics and big data techniques are also game changers. The gaming companies that have employed them can, for now, meet these threats with confidence.

Naim Falandino is Chief Scientist at Nokia Deepfield with expertise in real-time analytics, machine learning, and information visualization.


Apple Executive Reveals More of Its Self-Driving Technology

A theme emerged when Apple’s director of artificial intelligence research outlined results from several of the company’s recent AI projects on the sidelines of a major conference Friday: each involved giving software the capabilities needed for self-driving cars.

Ruslan Salakhutdinov addressed roughly 200 AI experts who had signed up for a free lunch and peek at how Apple uses machine learning, a technique for analyzing large stockpiles of data. He discussed projects using data from cameras and other sensors to spot cars and pedestrians on urban streets, navigate in unfamiliar spaces, and build detailed 3-D maps of cities.

The talk offered new insight into Apple’s secretive efforts around autonomous-vehicle technology. Apple received a permit from the California DMV to test self-driving vehicles in April, and CEO Tim Cook confirmed his interest in such technology in June.

The scale and scope of any car project at Apple remains unclear. Salakhutdinov didn’t say how the projects he discussed Friday fit into any wider effort in automated driving, and a company spokesman declined to elaborate.

Salakhutdinov showed data from one project previously disclosed in a research paper posted online last month, which trained software to identify pedestrians and cyclists using lidar, the 3-D laser scanners fitted to most autonomous vehicles.

Other projects Salakhutdinov discussed don’t appear to have been previously disclosed. One created software that identifies cars, pedestrians, and the driveable parts of the road in images from a camera or multiple cameras mounted on a vehicle.

Salakhutdinov showed images demonstrating how the system performed well even when raindrops spattered the lens, and could infer the position of pedestrians on the sidewalk when they were partially screened by parked cars. He cited that last result as an example of recent improvements in machine learning for some tasks. “If you asked me five years ago, I would be very skeptical of saying ‘Yes you could do that,’” he said.

Another project Salakhutdinov discussed involved giving software moving through the world a kind of sense of direction, a technique called SLAM, for simultaneous localization and mapping. SLAM is used on robots and autonomous vehicles, and also has applications in map building and augmented reality. A fourth project used data collected by sensor-laden cars to generate rich 3-D maps with features like traffic lights and road markings. Most prototype autonomous vehicles need detailed digital maps in order to operate. Salakhutdinov also mentioned work on making decisions in dynamic situations, a topic illustrated on his slides with a diagram of a car plotting a path around a pedestrian.

Apple’s event took place toward the end of a week-long conference on machine learning called NIPS. Nearly 8,000 people attended, an increase of almost five times since 2012. There was a strong showing from recruiters—including Elon Musk—hoping to lure machine learning engineers, highly prized employees in short supply.

The AI talent shortage was a primary reason for Apple’s event Friday, which attracted people from top universities such as MIT and Stanford, and companies including Alphabet and Facebook. It also included talks from engineers about how machine learning is used inside Apple products such as the Siri personal assistant. Carlos Guestrin, Apple’s director of machine learning and a professor at the University of Washington, spoke about the powerful computer systems and large datasets available to machine-learning engineers who join the company. He won applause by announcing that Apple is open sourcing software to help app developers use machine learning, first developed at his startup Turi, which Apple acquired last summer.

Friday’s event, and Salakhutdinov’s discussion of research results, show how Apple is being forced to relax its famed secrecy as it competes for talent with rivals such as Google. Salakhutdinov joined Apple in October 2016, although he retains a professorship at Carnegie Mellon University. Soon after, at last year’s NIPS conference, he announced that his researchers would be able to publish academic papers, like their counterparts at Facebook and Google. It was seen as a savvy concession to the academic bent of AI experts even inside industry.

Apple’s AI thaw has proceeded slowly, though. A company spokesman pointed to five academic machine learning papers released since Salakhutdinov joined the company, but said that Apple doesn’t maintain a count of such publications. The company has also started sharing some of its work on a technical blog branded as the Apple Machine Learning Journal. By contrast, Alphabet’s AI research groups contributed to more than 60 accepted papers at NIPS this week alone. To keep pace with, or get ahead of, competitors in AI, Apple may need to share more with them.