There are a couple of pieces of Google Nest thermostat news to bring you this weekend: support is ending for the older 1st and 2nd-gen models, while Google is going to stop selling the thermostat in the EU completely.
First, the end of support for the 1st-gen Nest thermostat (launched in 2011) and the 2nd-gen Nest thermostat (launched in 2012). Google says (via 9to5Google) that no further software updates will be issued for these devices from Saturday, October 25, 2025.
At that point you'll no longer be able to control the thermostats from your phone either, and Home/Away modes will stop working too. You'll still be able to adjust modes, schedules, temperatures, and settings on the actual devices, however.
To help soften the blow, Google is offering some upgrade discounts on the latest 4th-gen Nest thermostat for those who have older models: if you're in the US, you can get $130 off, which is almost half price.
No-go in Europe
The Nest Thermostat E (Image credit: Google)
The second bit of news here is that there will be no more new Nest thermostats sold in Europe going forward. The 3rd-gen model and the Nest Thermostat E, which launched in 2015 and 2017 respectively, are going to be all you can get hold of now.
"Heating systems in Europe are unique and have a variety of hardware and software requirements that make it challenging to build for the diverse set of homes," says the statement from Google.
That means the shiny new model, launched last year and described as sporting "a stunning design infused with AI" in our 4th-gen Nest thermostat review, isn't going to be available in Europe.
It's no surprise that consumers are wary of investing in smart home tech with discontinued devices and incompatible standards to deal with – even from the biggest companies in the business. It's an area where Google and others can do a lot better.
Watching Netflix with captions on has long been a practice in my home, starting with my youngest, who, as a Gen Zer, explained how streaming with captions on enables the multi-tasking that is so much a part of their lives.
Back when I first learned about this fast-growing habit, I assumed closed captions or subtitles were solely for people with hearing challenges. I knew my child didn't have any, and when I asked why they were watching Netflix with captions on, they looked at me like I had bananas for arms and told me, "Everyone does it."
That hyperbole led to some research, discovery, and this post I wrote on Medium. For my daughter and their GenZ cohort, captions helped bridge the distracted divide between the phone in their hands and the best streaming content on their TV. I even spoke to mental health professionals who also noted this was becoming common practice.
As I wrote then, my child tried to explain how captions were more than just an aid to understanding:
“It helps me with my ADHD: I can focus on the words, I catch things I missed, and I never have to go back,” she replied. “And I can text while I watch.”
I got it, but I don't think I fully understood until I turned on my own Netflix closed captions. There were two shows, in particular, that made me a convert.
First, Call My Agent, a smart French comedy about a Paris-based talent firm. It ran four seasons, and we got hooked, even though we were reading the whole time.
The second show was Peaky Blinders. Now, this show is in English, but the accents from Birmingham, England, were so thick that when we first tried watching, we gave up halfway through the first episode because we couldn't understand a thing. A few years later, we returned but with captions enabled. That changed everything, and we became huge fans.
One thing about Netflix subtitles has always bothered me, though: descriptions of sounds, such as "[tense music playing]" or "[dog barking]". You get the idea. These audio cues are crucial for the hearing-impaired, but essentially unnecessary for those with full hearing, like me.
Even for GenZers like mine, I don't think the descriptions of these audio-only moments enhanced their viewing experience; if anything, they were a bit of a distraction.
Netflix's decision to finally add an option for subtitles only is long overdue. The adjustment appears now as a new option under Audio: "English" subtitles, as opposed to "English (CC)".
(Image credit: Future)
It's a small change, I know, but I'm certain my family and I will be using it from now on. At least some of us.
You see, while my youngest watches everything with captions, and my wife increasingly does too, I still use them less often, and my son never does; he finds them distracting.
If I'm being honest, though, I'm finding that captions are useful in more situations than just foreign language and accent-heavy productions. I can no longer quite pick up what people are saying when they're speaking softly, whispering, or, as is often the case, mumbling.
Also, sound mixing often overplays sound effects and overwhelms the dialogue. In lieu of a better sound system, a clear caption is an effective solution. And now, without the extraneous text that tells me "loud explosion," this experience is about to get so much better.
So, thanks, Netflix, for always supporting the hearing-impaired and for now giving us new captioning devotees a sound-effect-free option. I'll be using it a lot.
Alexandru Costin is Vice President, Generative AI and Sensei at Adobe.
There was no getting away from Firefly at this year’s Adobe Max London. Already infused across the Creative Cloud suite, the AI image and video generator has been massively upgraded with new tools and features.
Ahead of the event, we sat down with Alexandru Costin, Vice President, Generative AI and Sensei at Adobe, to explore what's new with Firefly, why stories matter when using the best AI tools, and how professionals can use it to enhance creativity across the board.
At Max, we have the next generation of our image model, two versions of it. We have a vector model, we have the video model. So, a lot of progress on the model from Adobe, commercially safe, high quality, amazing human rendering. A lot of control and a great style engine, et cetera. We are also introducing third-party model integrations.
Our customers told us that they want to stay in our tools, in our workflows. They are still using other models for ideation purposes, or for different personalities. So, we’re announcing OpenAI's GPT image integration and Google's Imagen and Veo 2 in Firefly, and Flux integration in Firefly Boards.
The third big announcement is Firefly Boards, a new capability in the Firefly web application. We look at it as an all-in-one platform for next-generation creatives to ideate, create, and produce production content. Firefly Boards is an infinite canvas that enables real-time team collaboration and commenting, but also deep Gen AI features, tapping into all of these first-party and third-party models, with new capabilities for remixing images.
It’s not easy. We've been working on the project concept for like, a year. Actually, that underlying technology, we've been working on for many years, like real-time collaboration with deep integration, with storage, and innovation in Gen AI user experiences, remixing, auto-describing images to create the prompts for you. There's a lot of deep technology that went into it. It looks like magic, and is very easy [to use]. We hope it's so easy. Our goal is to build a complex layer. So for customers, it's like magic, and everything just works.
My favorite feature is the integration between image, video, and the rest of the Adobe products. We're trying to build workflows where customers who have an intent in mind, and want to paint the picture that's in their head, can use these tools in a really connected way without having to jump through so many hoops to tell their story. Firefly Image 4 offers amazing photorealism, human rendering quality, and prompt understanding. You iterate fast.
With Image 4 Ultra, which is our premium model, you can render your image with additional details, and we can take them into the Firefly video model as a keyframe, and create a video from that whole image. Then you can take that video into Adobe Express and make it like an animated banner, add text, add fonts. In Creative Cloud, we have a lot of capabilities that exist already. We're bringing Gen AI inside those workflows, either in Firefly on the web, or directly as an API integration.
But for me, I think the magic is having all of this accessible in an easy way. The Photoshop team is also working on an agentic interface. They call it a new Actions panel. You type in what you want. We have 1,000 high-quality actions we've curated for you. There are all these tools in Photoshop that are sometimes hard to discover if you're not an expert, but we're gonna just bring them and apply them for you. I mean, you will learn along the way, but you don't need to know everything before you start. Not only are we helping you achieve your goal, we're also teaching you the ins and outs of Photoshop as we go through this.
It is. It's too powerful to some extent. It has so many controls, it might be intimidating, but with the new Actions panel, we want to take a big chunk of that entry barrier away.
(Image credit: Adobe // Future)
Everybody will benefit from this technology in different ways. For creative professionals, it will basically remove some of the tedium, so they can focus on creativity. And with things like Firefly Boards, they will be able to work with teams and clients much better. The client can upload some stylistic ideas into boards, and then you can take them and integrate them very fast into your professional workflow.
As for consumers – people who want to spend seconds creating something – with Firefly, you just type in the prompt and we do it for you. It's a great capability.
In the middle, there are the folks learning in their careers, aspiring creative professionals, next generation creatives. And for them, we want to give them both Gen AI capabilities, but also a bridge towards the existing pixel-perfect tools that we have at Adobe. Because we think a mix of those two worlds is the best mix that next generation creatives need to be armed with.
For me, a big opportunity is better understanding of humans, like prompt understanding agentic, having a creative partner to bounce ideas off of. Another thing we're announcing is the [upcoming] Firefly mobile app. This is a companion app that can use many of the Firefly app capabilities, generate text, generate video, et cetera. But also, because it's on mobile, you have access to the camera, you have a microphone, there are many new opportunities to make these interactions easier. So, we're looking into that. We do think next generation creatives are a big target market for us because we want to give them the tools of the trade.
For us, customers are why we get up in the morning every day, they are telling us what they need, and they told us they want more quality, better humans, more control, better stylization. That's what's behind the image model updates. We just want to make them more usable in more workflows for actual production use-cases. Because our model is uniquely positioned to be safe for commercial use, we want customers to use it everywhere.
Video is also growing, and much of our customer base doesn't know how to use the video product. So, making video creation more accessible is another great accelerant for creativity. We want to offer a larger population of people the tools to tap into video and be able to start achieving their goals there. Meanwhile, inside products like Premiere Pro, we're continuing to integrate deeper, more advanced features; a couple of weeks ago at NAB, we launched Generative Extend, which won one of the awards. Gen Extend is a 4K extension, enabling professional videographers to extend clips so they don't have to reshoot.
What motivates us is helping our customers tell stories, better stories, more diverse stories, and be successful in their careers.
I think through human creativity and engineering. How do they differentiate today? They're all using Photoshop, yet they do find ways to differentiate because, in reality, Gen AI is a tool designed, at least from an Adobe perspective, to be of service to the creative community, and we want to give them a more powerful tool that should help them level up their craft.
They're describing it as going from the person editing to a creative director. All of our customers can become directors of these Gen AI tools to help them tell better stories, tell stories faster, et cetera. So, we think the differentiation will still be in the creativity of the human using the tool. And we're seeing so much innovation. We're seeing people using these technologies in ways we haven't even thought about, which is very exciting, always. Mixing them in novel ways. Because that's how you differentiate. And we do think there will always be many ways to express somebody's creativity.
We think creativity comes in a variety of ways, and there are different tools creative people will use and mix together to tell better stories and change culture.
Explore the power of generative AI with Adobe Firefly
Integrated into almost every Adobe app, Firefly is tailor-made for creatives at every level - from professionals to consumers. Want to see how Adobe's generative AI can help you iterate your designs faster? Try out Firefly's tools by clicking here.
Earlier this week, Xiaomi launched the Poco M7 Pro 5G in the UK, the latest entry in the sub-brand’s line of affordable handsets, and a device that brings some neat features to the table for its low £199 price tag.
For far less than even some of the best cheap phones, the M7 Pro 5G offers a 120Hz display, a 5,110mAh battery, and a 50MP Sony camera – and nothing I’m about to say changes the fact that this is solid value for money.
However, Xiaomi has insisted on labeling some of the Poco M7 Pro 5G’s features and components as “flagship” – particularly its IP64 dust and water resistance rating. With respect to Xiaomi, which makes some of the best phones around (even if they’re a pain to get your hands on), that description is fresh out of 2012.
An IP64 rating is not, under any modern definition, flagship-grade for a phone. The most recent true flagships – like the OnePlus 13 – carry IP69 ratings, which promise resistance from powerful jets of heated water and total dust resistance.
In fairness, an IP64-rated phone is still dust-sealed, but that standard only protects against splashes of water with some ingress allowed. That just doesn’t match up against the iPhone 16 Pro Max, Samsung Galaxy S25 Ultra, and Google Pixel 9 Pros of the world, all of which can take a dip in fresh water and emerge unscathed thanks to their IP68 ratings (though we'd still never recommend testing this claim for yourself!).
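To make the comparison concrete, the two digits of an IP rating can be read independently: the first grades dust protection, the second water. Here's a minimal sketch in Python; the lookup tables are my own simplified summaries of the relevant IEC 60529 levels, not the full standard, and the function name is invented for illustration:

```python
# Decode an IEC 60529 IP rating string (e.g. "IP64") into rough
# dust and water protection descriptions. Simplified summaries only.

DUST = {
    "5": "dust-protected (limited ingress allowed)",
    "6": "dust-tight (no ingress)",
}
WATER = {
    "4": "splash-resistant (splashes from any direction)",
    "7": "immersion up to 1m for 30 minutes",
    "8": "continuous immersion beyond 1m (manufacturer-specified)",
    "9": "high-pressure, high-temperature water jets",
}

def decode_ip(rating: str) -> tuple[str, str]:
    """Split an 'IPxy' code into (dust, water) protection descriptions."""
    first, second = rating.removeprefix("IP")[:2]
    return (
        DUST.get(first, f"dust level {first}"),
        WATER.get(second, f"water level {second}"),
    )

print(decode_ip("IP64"))  # dust-tight, but only splash-resistant
print(decode_ip("IP68"))  # dust-tight and submersible
```

Reading the codes this way makes the gap obvious: IP64 and IP68 share the same dust rating, and the entire difference lies in that second digit.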
Furthermore, press materials seen by TechRadar describe the 2x digital zoom function of the M7 Pro 5G's camera system as "flagship-level", and while flagships do utilize in-sensor cropping, the M7 Pro 5G is unlikely to keep up with its relatively smaller 1/1.95-inch sensor. For reference, the iPhone 16 sports a 1/1.56-inch sensor, while its Android rival, the Google Pixel 9, boasts a 1/1.31-inch sensor.
And even if the M7 Pro 5G's 2x digital zoom does somehow match its more expensive rivals, this feature is not described as "flagship-level" on the phone's official web page. That makes me think Xiaomi is either confused about its own product or seeking to influence coverage with terms it won't use in public. Either way, that's an issue.
In fact, the only aspect of the Poco M7 Pro 5G I’d call “flagship” quality, at least without having tested one myself, is its 5,110mAh battery – and yet Xiaomi doesn’t call it so.
The Oppo Reno 12 FS. Nice looking? You bet. A flagship camera phone? Not a chance. (Image credit: Future)
It's not just Xiaomi doing this, either. Oppo's UK website describes the Oppo Reno 12 FS camera system as a "flagship camera combo", and as my full Oppo Reno 12 FS review details, that's flat-out untrue.
In fact, the Oppo Reno 12 FS 5G (which otherwise boasts great value for money and serviceable performance) sports a 50MP main camera, an 8MP ultrawide camera, and a 2MP macro camera. If the iPhone 17 Pro Max launches with a 2MP lens in tow, I'll happily give Oppo a retroactive pass for this, but until then, that's simply not flagship-grade hardware.
Smartphone semantics
What even is a flagship these days? There are two phones above the iPhone 16 (pictured) in Apple's mobile portfolio (Image credit: Future)
What we’re witnessing is a peculiar attempted transformation of language. Flagship was once a literal term, meaning the best phone a company has to offer, but, as I’ve previously discussed, the term has become more vague as companies like Apple and Samsung develop flagship lineups comprising several distinct but related models.
What companies like Xiaomi are attempting to do is push the term one step further into the abstract; to change the meaning of the word “flagship” to one that simply connotes ideas of better performance and higher status, rather than a title given to certain devices by phone makers to reflect the expectations of consumers.
In the plainest terms, these companies would like control of the “flagship” narrative to get you to think better of their mid-range and budget phones.
That’s not necessarily as ominous as it sounds – modern tech marketing relies on imaginative storytelling that highlights the position of devices in our lives. Just look at the real-life stories that opened the September 2024 Apple Event. I’ve no problem with phone makers calling their devices essential, or innovative, or brilliant, because most of the time there’s a good bit of truth to these claims.
However, when it comes to the term “flagship”, it’s important that brand messaging aligns with user expectations, so that customers aren’t misled. Flagship phones are typically big sellers and a big draw for users, so it’s crucial that customers who may not know too much about tech specs aren’t drawn to products that won’t live up to their needs.
The new Poco M7 Pro 5G is a budget phone, through and through – and there’s no shame in that. As much as my magpie-coded brain loves a shiny new flagship, I recommend the Samsung Galaxy A36 to most people I know as they simply don’t care about the latest and greatest specs – I’m sure the Poco M7 Pro will find its own audience of savvy customers, too.
But for the buyer who just wants the latest and greatest phone, and is willing to spend up to $1,200 to get that, the least phone makers can do is keep the term “flagship” to its current definition.
A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Saturday's puzzle instead then click here: Quordle hints and answers for Saturday, April 26 (game #1188).
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,100 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.
Quordle today (game #1189) - hint #1 - Vowels
How many different vowels are in Quordle today?
• The number of different vowels in Quordle today is 4*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1189) - hint #2 - repeated letters
Do any of today's Quordle answers contain repeated letters?
• The number of Quordle answers containing a repeated letter today is 1.
Quordle today (game #1189) - hint #3 - uncommon letters
Do the letters Q, Z, X or J appear in Quordle today?
• No. None of Q, Z, X or J appear among today's Quordle answers.
Quordle today (game #1189) - hint #4 - starting letters (1)
Do any of today's Quordle puzzles start with the same letter?
• The number of today's Quordle answers starting with the same letter is 2.
If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:
Quordle today (game #1189) - hint #5 - starting letters (2)
What letters do today's Quordle answers start with?
• P
• Y
• C
• C
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
Quordle today (game #1189) - the answers
(Image credit: Merriam-Webster)
The answers to today's Quordle, game #1189, are…
A head-scratcher, but lengthy thinking time aside, I managed to get through the run without any wrong guesses.
My good fortune was using a start word that began with a C. Without that headstart I would have been in trouble.
How did you do today? Let me know in the comments below.
Daily Sequence today (game #1189) - the answers
(Image credit: Merriam-Webster)
The answers to today's Quordle Daily Sequence, game #1189, are…
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Saturday's puzzle instead then click here: NYT Connections hints and answers for Saturday, April 26 (game #685).
Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.
NYT Connections today (game #686) - today's words
(Image credit: New York Times)
Today's NYT Connections words are…
What are some clues for today's NYT Connections groups?
Need more clues?
We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…
NYT Connections today (game #686) - hint #2 - group answers
What are the answers for today's NYT Connections groups?
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Connections today (game #686) - the answers
(Image credit: New York Times)
The answers to today's Connections, game #686, are…
I regularly use a calculator, have copious amounts of hair on my head, love a salty snack, and regularly shop at Ikea, yet I found today’s puzzle utterly baffling.
CALCULATOR BUTTONS I got straight away, but then came the collapse. First, I thought that there was a group of Victorian authors that I knew nothing about – so I linked LOCK, THATCH, TAKI and TUFT.
I got the "one away!" alert, but still didn’t think about hair and instead persisted with my literature hunch and swapped THATCH for RUFFLE.
After finally getting AMOUNTS OF HAIR I still faltered with just two groups to get – first thinking there was something about bowls. In my defense, cultural difference again thwarted me, as the majority of the products referenced as a SALTY SNACK UNIT, as well as SWEDISH FISH, are rare delicacies in the UK.
How did you do today? Let me know in the comments below.
Yesterday's NYT Connections answers (Saturday, April 26, game #685)
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don't technically need to solve the final one, as you'll be able to answer that one by a process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.
It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up. For instance, watch out for homophones and other wordplay that could disguise the answers.
It's playable for free via the NYT Games site on desktop or mobile.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Saturday's puzzle instead then click here: NYT Strands hints and answers for Saturday, April 26 (game #419).
Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.
NYT Strands today (game #420) - hint #1 - today's theme
What is the theme of today's NYT Strands?
• Today's NYT Strands theme is… Sleep tight
NYT Strands today (game #420) - hint #2 - clue words
Play any of these words to unlock the in-game hints system.
NYT Strands today (game #420) - hint #3 - spangram letters
• Spangram has 7 letters
NYT Strands today (game #420) - hint #4 - spangram position
What are two sides of the board that today's spangram touches?
First side: left, 5th row
Last side: right, 6th row
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Strands today (game #420) - the answers
(Image credit: New York Times)
The answers to today's Strands, game #420, are…
My search started off well after quickly getting MASK and EARPLUGS, but then took a devious turn.
Even after getting a hint and finding the many letters of MELATONIN, I struggled to connect them – which became the story of all the other words too.
You could argue that the theme and spangram are inaccurate and what we were actually looking for today were sleep aids for those among us who struggle to fall asleep without assistance.
I count myself among a growing demographic trying desperately to improve their sleep quality. I actually got measured for my pillow, have a yearly app subscription just so I can listen to the same 10-minute MEDITATION every night, and have earplugs, mask, and aromatherapy spray all at hand should I struggle to reach the land of nod.
How did you do today? Let me know in the comments below.
Yesterday's NYT Strands answers (Saturday, April 26, game #419)
Strands is the NYT's not-so-new-any-more word game, following Wordle and Connections. Now a fully fledged member of the NYT's games stable, it has been running for over a year and can be played on the NYT Games site on desktop or mobile.
I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.
In April 1985, a small team at Acorn Computers in Cambridge, UK, set out to rethink what a processor could be. Engineers Sophie Wilson and Steve Furber developed the ARM1 (the name originally stood for Acorn RISC Machine), an unassuming chip with just 25,000 transistors, first put to work as a second processor for the BBC Micro – a 32-bit design that emphasized a reduced instruction set for faster, more efficient computation.
The design's low power consumption was partially driven by practical constraints, namely the need to run in cheaper plastic packaging. ARM2 soon followed, incorporated into the Acorn Archimedes, the first RISC-based home computer. ARM3 introduced a 4KB cache and further improved performance.
After the spin-off from Acorn in 1990, ARM Ltd. was founded as a joint venture between Acorn, Apple, and VLSI, with the initials reinterpreted as Advanced RISC Machines. One early commercial success was the Apple Newton, followed by widespread adoption in mobile phones like the Nokia 6110, which featured the ARM7TDMI.
(Image credit: Arm)
Looking to the future
ARM6, introduced in 1991, brought full 32-bit processing and an MMU, key to powering GSM mobile phones. In 2005, the Armv7 architecture debuted with the Cortex-A8 processor, which brought SIMD (NEON) support and powered many early smartphones.
In 2011, Armv8 introduced 64-bit support and became the foundation for cloud, data center, mobile, and automotive computing. Features like SVE and Helium pushed performance and AI capabilities further.
The 2021 launch of Armv9 marked the architecture's shift into AI-centric workloads. It introduced Scalable Vector Extension 2 (SVE2), Scalable Matrix Extension (SME), and Confidential Compute Architecture (CCA).
These features made it suitable for everything from smartphones with advanced image processing to AI servers handling generative workloads. SME accelerates generative AI and MoE models, while SVE2 brings enhanced AI capability to general-purpose compute.
Arm's compute subsystems (CSS), based on Armv9, now serve client, infrastructure, and automotive markets. By integrating CPUs, interconnects, and memory interfaces, these CSS platforms support rapid development of specialized silicon.
From the original ARM1 with just 25,000 transistors to today’s Armv9 CPUs packing 100 million gates, the architecture has consistently driven computing forward for four decades. Arm-based chips now power over 300 billion devices worldwide, from tiny embedded sensors to full-scale data centers.
With 99% of smartphones running on Arm and growing adoption in IoT, cloud, and AI workloads, the architecture continues to scale thanks to its energy-efficient design and flexible licensing model.
Looking ahead, there have been growing rumors that Arm could move beyond licensing and into chip production, something that would put it in competition with its biggest customers. This speculation intensified recently following the acquisition of Ampere Computing, Arm’s only independent server chip vendor, by SoftBank, Arm’s Japanese owner.
Samsung has confirmed that it's working on a tri-fold foldable phone, and while we don't have too many details about it yet, the latest leak around the handset gives us some more information about the screen size.
This tip comes from well-known leaker Digital Chat Station (via Notebookcheck), who says that we're looking at a main screen size of around 9.9 inches. That's a little smaller than the 10.2-inch display sported by the Huawei Mate XT tri-fold.
It also lines up rather neatly with previous rumors around this Samsung device: those rumors have predicted a main screen size of 9.96 inches and an outer screen size of 6.49 inches, which also indicates a key difference from the Huawei Mate XT.
Whereas the three panels of the Mate XT fold back on each other, leaving a third of the screen visible when it's closed, the Samsung tri-fold is expected to fold inwards – so all of the main display gets covered up when it's shut, and a second display is needed.
More leaks and rumors
The Galaxy Z Fold 6 should get a successor this year (Image credit: Future)
This same leak suggests that the Samsung tri-fold will be launching this year. It may show up sometime in July, which is when the Samsung Galaxy Z Fold 7 and Samsung Galaxy Z Flip 7 are expected to be unveiled.
Other whispers we've heard around this Samsung tri-fold are that it'll offer 2,600 nits of brightness on its screens, which is a very decent figure and matches up with what's already offered by the Samsung Galaxy Z Fold 6.
There's also been talk that the tri-fold might end up being called the Samsung Galaxy G Fold. That's by no means official yet, but that moniker would fit in neatly with the other foldable phones that Samsung already manufactures.
It's going to be interesting to see how Samsung prices this phone. Obviously, it's going to have to cost a lot because of the tech, but we're hoping that it's not prohibitively expensive – and that it goes on sale worldwide.
Phison has set a new benchmark in enterprise storage performance with its Pascari X200E 6.4TB SSD, breaking records in sequential read speed, well beyond what even the fastest external hard drives can deliver.
TweakTown lab tests found the drive achieved a sequential throughput of 15,025MB/s, the highest ever recorded. In the 8K 70/30 test, which simulates database traffic, the X200E also became the first flash-based SSD to surpass 1 million IOPS.
The X200E is part of Phison’s Pascari Performance X-Series, designed specifically for extreme write intensity in data-heavy environments. It ships in U.2 and E3.S form factors, with capacity options ranging from 1.6TB to 30.72TB.
Enterprise DNA means enterprise demands
Built around the 16-channel Phison PS5302-X2-66 controller and equipped with Hynix 176-layer eTLC NAND, the X200E runs on a PCIe Gen5 x4 interface. Most desktop PCs support M.2 SSDs rather than the enterprise-grade U.2 interface, making them physically and technically incompatible with the X200E.
Even with an adapter, most consumer systems can't sustain the queue depths or provide the thermal headroom required to take full advantage of the drive's capabilities. Given these requirements, the X200E isn't designed for typical users: it's built for data centers, not desktops or gaming rigs.
Phison rates the X200E at up to 14,800MB/s sequential read and 8,700MB/s sequential write performance. In addition to raw speed, the drive excels in mixed workload scenarios, delivering up to 3.2 million IOPS with consistent performance across multiple queue depths, further underscoring its enterprise focus.
The X200E is engineered to support modern AI workloads and hyperscale data center operations, which often demand performance beyond the traditional queue depth of 32 used in legacy SSD benchmarks. Test results show the drive maintains steady-state performance even under random workloads with queue depths as high as 4096.
As AI models continue to generate massive volumes of reads and writes across complex workflows, SSDs like the X200E will help power everything from video delivery platforms to real-time analytics pipelines.
New research from Okta has revealed that hackers from the Democratic People’s Republic of Korea (DPRK) are using generative AI in their malicious interview campaign - a series of tactics that involve gaining employment in remote technical roles at Western firms, usually in industries with sensitive security data such as defense, aerospace, or engineering.
This isn’t the first time North Korean fake job hackers have gone the extra mile with their campaigns, but the new research has found that GenAI is playing an integral role in the employment schemes.
The AI models are used to “create compelling personas at numerous stages of the job application and interview process” and then, once hired, GenAI is again used to assist in maintaining multiple roles, all earning revenue for the state.
Malicious interview
AI was used by these hackers in a number of ways, including generating CVs and cover letters, conducting mock interviews via chat and webcam, translating and summarizing messages, and managing communications for multiple jobs from different accounts and services.
To assist, the hackers have a sophisticated network of ‘facilitators’ that provide in-country support, technical infrastructure, and “legitimate business cover” - helping the North Koreans with domestic addresses, legitimate documents, and support during the recruitment process.
The campaign is growing ever more sophisticated, especially given that the hackers are now working both sides of the job-seeking process, targeting job seekers with fake interviews in which they deliver malware and infostealers.
These elaborate schemes often start on legitimate platforms like LinkedIn or Upwork - with the attackers reaching out to victims to discuss potential opportunities. Anyone on the job hunt or in the hiring process should be extra vigilant about who they are speaking to, and should be careful not to download any unfamiliar software.
WhatsApp has defended the wider rollout of its Meta AI assistant inside the popular messaging app, despite some significant pushback from users.
Earlier this month, Meta rolled out the AI assistant – represented by a blue ring in the bottom-right corner of your WhatsApp chats – across several new countries in the EU, the UK, and Australia.
Because WhatsApp is very popular in those regions – more so than the likes of Apple's iMessage – there was a vocal backlash to its arrival on platforms like Reddit, particularly as it isn't possible to turn the feature off. But WhatsApp has now commented on those concerns for the first time.
In a statement to the BBC, WhatsApp said: "We think giving people these options is a good thing and we're always listening to feedback from our users". It added that it considers the feature to be similar to other permanent features in the app, like 'channels'.
Although the Meta AI circle hovers permanently in your chats section, it doesn't actually have access to your chats. Meta's Help pages state that "your personal messages with friends and family are off limits", while the Meta AI chat window states that "it can only read messages people share with it".
Some privacy concerns remain, however, so this week WhatsApp introduced a new feature called "Advanced Chat Privacy" to address them.
A privacy peace offering
[Image credit: WhatsApp]
While it isn't possible to turn off Meta AI in WhatsApp (it's also now integrated into the app's search bar), you will soon be able to use "Advanced Chat Privacy" to prevent others from using your chats in other AI apps.
The new setting, which is "rolling out to everyone on the latest version of WhatsApp", is designed to stop people from taking anything you share in WhatsApp outside of chats and groups. When it's turned on, your friends and contacts are blocked from "exporting chats, auto-downloading media to their phone, and using messages for AI features".
We haven't yet seen the feature in action, but you'll be able to turn it on by tapping on a chat name, then tapping the new "Advanced Chat Privacy" option. WhatsApp says this is also just the first version of the feature, with more protections en route to help you avoid a personal Signalgate fiasco.
That's likely to be a more popular move than baking Meta AI into WhatsApp, although a recent poll on the TechRadar WhatsApp channel shows the latter hasn't been universally condemned.
While the biggest chunk of our poll respondents (42%) said they would "never" use the Meta AI assistant in WhatsApp, a significant number (41%) said they would "maybe, sometimes" tap the blue ring, while 17% said they planned to use Meta's ChatGPT equivalent "regularly". Perhaps, like the prison walls in The Shawshank Redemption, we'll one day grow to depend on it.
We now know how many people are affected by a recent ransomware attack on Frederick Health Medical Group - almost a million.
The healthcare provider reported the new figures to the US Department of Health and Human Services (HHS), noting how on January 27, 2025, it experienced a “ransomware event” on its IT systems.
The information taken varies from person to person, Frederick Health Medical Group added. While the notice itself does not state the number of affected individuals, the company did share a figure with the US HHS: 934,326 individuals.
Second increase
The subsequent investigation determined that the threat actors managed to steal certain files from a file share server.
These files included patient names, addresses, dates of birth, Social Security numbers, driver’s license numbers, medical record numbers, health insurance information, and/or clinical information related to patient care.
So far, no threat actors have assumed responsibility for the attack, and the data has not yet surfaced on the dark web, possibly suggesting Frederick Health actually paid the ransom demand.
The organization has roughly 4,000 employees and more than 25 locations. To mitigate the impact of the attack, it also offered all affected individuals free credit monitoring and identity theft protection services through IDX.
Healthcare organizations are a prime target for ransomware operators, given the sensitivity of the data they operate with. In April 2025 alone, we've had stories of a cybersecurity CEO who tried to install malware on hospital computers, attacks on Yale Health and DaVita, and the data leak at Logezy.
Furthermore, Blue Shield of California also recently disclosed a data breach that exposed sensitive data of 4.7 million members.
Via BleepingComputer
Cybersecurity researchers from ARMO recently discovered a security oversight in Linux that allows rootkits to bypass enterprise security solutions and run stealthily on affected endpoints.
The oversight happens because the ‘io_uring’ kernel interface is ignored by security monitoring tools. Built as a faster, more efficient way for Linux systems to perform I/O, io_uring helps modern computers handle lots of information without getting bogged down. It was introduced back in 2019, with the release of Linux 5.1.
Apparently, most security tools look for shady syscalls and hooking while completely ignoring anything involving io_uring. Since the interface supports numerous operations through 61 op types, it creates a dangerous blind spot that can be exploited for malicious purposes. Among other things, the supported operations include reads and writes, creating and accepting network connections, modifying file permissions, and more.
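To make the blind spot concrete, here's a toy Python sketch (not from ARMO's research; the opcode values follow the kernel's IORING_OP_* enum in linux/io_uring.h, and the "monitor" is a deliberately simplistic stand-in for a syscall-name-based security tool). A direct openat call trips the watchlist, but the same file-open expressed as an io_uring submission entry never does, because the only syscall the monitor sees is io_uring_enter:

```python
# Toy illustration of the io_uring monitoring blind spot.
# A naive "EDR" watches for sensitive syscall names; operations
# submitted as io_uring SQE opcodes never hit that watchlist,
# because the kernel performs them inside io_uring_enter().

WATCHED_SYSCALLS = {"openat", "read", "write", "connect", "accept"}

# A few opcodes from the kernel's IORING_OP_* enum (linux/io_uring.h).
IORING_OPS = {1: "IORING_OP_READV", 13: "IORING_OP_ACCEPT",
              16: "IORING_OP_CONNECT", 18: "IORING_OP_OPENAT"}

def naive_monitor(event):
    """Flag an event only if its syscall name is on the watchlist."""
    return event.get("syscall") in WATCHED_SYSCALLS

# Direct syscall: the monitor sees it.
direct = {"syscall": "openat", "path": "/etc/shadow"}

# Same operation via io_uring: the observed syscall is io_uring_enter;
# the real operation is just an opcode inside a submission queue entry.
via_uring = {"syscall": "io_uring_enter",
             "sqe": {"opcode": 18, "path": "/etc/shadow"}}

print(naive_monitor(direct))      # True  -> flagged
print(naive_monitor(via_uring))   # False -> invisible to the monitor
print(IORING_OPS[via_uring["sqe"]["opcode"]])  # IORING_OP_OPENAT
```

A real detector would need to inspect submission queue entries themselves, or hook deeper in the kernel (for example via eBPF, which is the flexibility Tetragon's developers point to below), rather than match on syscall names.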
According to BleepingComputer, the risk is so great that Google turned it off by default both in Android and ChromeOS, which use the Linux kernel.
To demonstrate the flaw, ARMO built a proof-of-concept (PoC) rootkit called “Curing”. It can pull instructions from a remote server and run arbitrary commands without triggering syscall hooks. They then tested it against popular runtime security tools, and determined that most of them couldn’t detect it.
The researchers claim Falco was completely oblivious to Curing, while Tetragon couldn’t flag it under default configurations. However, the latter’s devs told the researchers they don’t consider the platform vulnerable since monitoring can be enabled to detect the rootkit.
"We reported this to the Tetragon team and their response was that from their perspective Tetragon is not "vulnerable" as they provide the flexibility to hook basically anywhere," they said. "They pointed out a good blog post they wrote about the subject."
ARMO also said they tested the tool against unnamed commercial programs and confirmed that io_uring-abusing malware was not being detected. Curing is now available for free on GitHub.
Via BleepingComputer
It looks like Signify – the company behind Philips Hue – is preparing to launch a new smart button to let you control your lights with a tap from anywhere in your house.
The news comes from Fabian of Hueblog.com, who spotted a listing for a new device from Signify on the website of the Federal Communications Commission (FCC). All devices capable of sending radio signals have to be registered with the FCC before they can be sold in the US, so it's often a good source of early info on products that'll be hitting the shelves soon.
Although there are no photos, we can glean several details from the FCC filing. The product is classified as a 'digital transmission system', and supporting documents (including the location of its FCC approval label and its testing report) reveal that it will be a small, circular device using Zigbee technology, with specifications very similar to the original Hue Smart Button.
"There will be no functional differences to the previous model and the form factor will basically remain the same," concludes Fabian. "However, the second-generation Hue Smart Button will be slightly larger and more angular, and the overall design will be a little more sophisticated."
What to expect
We're big fans of the original Philips Hue Smart Button here at TechRadar. When he reviewed it back in 2023, our reviewer Alistair Charlton appreciated how easy it is to install using either its wall-mounting plate or small adhesive disc. Whichever one you choose, the button itself just snaps into place magnetically, and can be removed and used as a remote whenever you like.
The Smart Button can perform two functions of your choice – one when it's pressed once, and another when it's pressed and held. It's much simpler than the Philips Hue Tap Dial Switch, which works as a dimmer with four programmable buttons in the center, but the Smart Button is a convenient and affordable way to operate your smart lights without using voice commands or an app.
What interests me is the timing. We know that Signify will soon be introducing an AI assistant for Philips Hue lights, which will use generative AI to create custom lighting schemes. It will be interesting to see whether we can use this assistant to program the button, or tap the button to cycle through AI-generated scene options.
Hopefully it will come in black as well. The Tap Dial Switch is available in a choice of colors, and a darker option would make the tiny button an even more discreet way to operate your lights.
Nvidia is on the verge of completing its RTX 5000 desktop GPU series launch, with the RTX 5060 on the horizon after its Ti counterpart launched earlier in April - and it's good news for budget gamers... well, sort of.
According to VideoCardz, the RTX 5060 is set to launch on May 19 at $299 (around £220 / AU$470) – the same price as its predecessor, the RTX 4060. It will utilize 8GB of VRAM, but is anticipated to take a decent performance leap over the last-gen card, using GDDR7 VRAM instead of GDDR6.
Its older brother, the RTX 5060 Ti, has both 8GB and 16GB models, with the latter being an easy choice for most PC-builders in terms of gaming performance: 8GB of VRAM is much less desirable for running modern games, as plenty of triple-A titles require more. While VRAM isn't always the decisive factor when it comes to performance, it becomes a bigger one for lower-end GPUs.
This may be one reason that steers potential consumers away from buying the RTX 5060, but its price could be the main one. The rumored $299 launch price is certainly appealing for a budget GPU, but the trend of the GPU market suggests partner cards sold by retailers will likely cost more.
Just like the 5060 Ti (and the RTX 5070 Ti), if the RTX 5060 doesn't have a Founders Edition option, then consumers will yet again be left at the hands of retailers with third-party cards - and if you've kept a close eye on GPU prices lately, that's not good at all.
[Image credit: Nvidia]
If prices are inflated for partner cards, then just forget about it...
It's bad enough that GPUs like the RTX 5070 Ti or the RTX 5080 have inflated pricing across multiple retailers, but at the very least, these are powerful cards capable of 4K gaming. The RTX 5060, unsurprisingly, isn't a powerhouse GPU: it's expected to be Nvidia's lowest-tier GPU if the RTX 5050 rumors aren't legitimate (at least for the desktop PC space, anyway), so it has no business costing more than the purported $299.
However, the state of the GPU market gives us a clear answer: third-party RTX 5060 cards will more than likely cost more than $299, and I think that will instantly destroy anything good it could potentially have going for it. Gamers are already unwilling to pay more for more powerful hardware, so I'd find it hard to imagine budget gamers will accept any price inflation with this GPU.
Let's not forget that it's only got 8GB of VRAM, which I must stress again is no longer acceptable for gaming in 2025. Games are becoming more demanding, and we're continuously getting PC ports that are poorly optimized, so it's safe to say 8GB won't cut it anymore.
The only hope I do have for the RTX 5060 is that there actually is a Founders Edition model, and that there's a good level of availability (particularly since rumors hint that Nvidia is bolstering stock). If not, it's hard to see where it will succeed...
Motorola has revealed its newest smartwatch, the Moto Watch Fit. Sporting a squircle design similar to the Apple Watch, the Moto Watch Fit is a slim, lightweight fitness tracker that works with all Android phones, with a large 1.9-inch AMOLED screen (topped with tough Gorilla Glass).
However, unlike the Apple Watch, it can reportedly run for up to 16 days on a single charge – more than 20 times the battery life of your standard Apple Watch Series 10.
While the best Apple Watch, the Apple Watch Ultra 2, tops out at 36 hours of battery life, most Apple Watches only last for 18 hours. This is largely due to the watch running the power-hungry watchOS operating system. They also pack goodies the Watch Fit doesn't, such as a speaker and microphone for taking calls and playing alarm sounds.
We don't yet know whether the Moto Watch Fit will run Wear OS 5 like a true Android smartwatch, but from this first look, I doubt it. I imagine it will be a low-power alternative more like the best fitness trackers, which tend to last about a week. Sixteen days is more like Garmin watch territory; a very impressive achievement.
While it looks similar to an Apple Watch, there are a few other differences aside from the missing speaker and mic. For one thing, there is no digital crown, as Motorola has opted to include a tactile side button instead, like a Fitbit Versa model.
The watch does have onboard GPS for tracking workouts such as running, walking and cycling. There's 5ATM water resistance for swims, and an aluminum frame with a plastic back to save on weight (it's only 25g). Its price and release date are unknown, but based on the specs, we imagine it'll be around the price of an Apple Watch SE 2.
Hands-on thoughts
[Image credit: Philip Berne / Future]
Our US mobile editor Philip Berne got some brief hands-on time with the Moto Watch Fit. He said the following:
"The Motorola Moto Watch Fit was a surprise launch alongside the latest Motorola Razr phones, and it seems decidedly more focused on fitness tracking than smartwatch features.
"It looks and feels like a slimmer Apple Watch, with its squircle shape and square display, and the interchangeable watch bands even look suspiciously like Apple's watch band design. I was surprised that the Moto Watch Fit lacks speakers, so it won't be able to play alarm sounds.
"Still, it lays nice and flat on my arm, it feels very lightweight, and it's durable enough to keep up with any activity you'd throw at a normal smartwatch, including 5ATM of water resistance. Motorola is claiming the Moto Watch Fit will deliver 16 days of battery life, so maybe cutting all those features will have a real benefit for fitness fans."
Microsoft has revealed it is now prepared to pay up to $30,000 in bounties to people who discover AI vulnerabilities in its Dynamics 365 and Power Platform products.
The company recently updated its bounty program with the new information.
"We invite individuals or organizations to identify security vulnerabilities in targeted Dynamics 365 and Power Platform applications and share them with our team. Qualified submissions are eligible for bounty rewards of $500 to $30,000 USD," the company said.
Microsoft is willing to shell out for inference manipulation flaws, model manipulation, and inferential information disclosure. The vulnerabilities need to be either important or critical in their severity.
"To be eligible for AI Bounty Awards, such vulnerability must be Critical or Important severity as defined in the Microsoft Vulnerability Severity Classification for AI Systems and reproducible on a product or service listed in the In Scope Services and Products."
Dynamics 365 is a cloud-based suite of integrated business applications that combines CRM and ERP capabilities, while Power Platform is a low-code development suite that enables users to analyze data, build apps, automate workflows, and create chatbots using Power BI, Power Apps, Power Automate, and Power Virtual Agents.
If $30,000 doesn’t seem like a lot of money for such vulnerabilities, it’s perhaps worth mentioning that Microsoft is also willing to pay more, depending on the impact and the severity of the reported vulnerabilities, as well as the quality of the submission.
This is the second time in 2025 that Microsoft has increased its bounty rewards.
In mid-February 2025, the company announced it was ‘enhancing security and incentivizing innovation’ by updating its Copilot (AI) bug bounty program and raising the reward to $5,000.
Bug bounties are used by software firms in collaboration with security researchers to root out vulnerabilities that could otherwise be exploited by threat actors - and Microsoft even runs its own Black Hat-style event with up to $4 million in potential awards for cloud and AI flaws.
Via BleepingComputer
You can buy a lot of cool tech with $300/£250. A pair of AirPods Pro 2, with change to spare, for example. Or two-thirds of a Nintendo Switch 2 pre-order. Or about 50 games in the next Steam sale. Or you could save it and put it towards an OLED TV.
Either way, we're sure you'd find something exciting to spend it on – and that sum could be yours if you simply tell us what you think of TechRadar.
Seriously, that's it – just click the link below, answer a few brief questions, and your name goes into the pot for a chance to win a $300 / £250 Amazon voucher. The whole thing will only take a few minutes – but it could lead to many hours of tech joy.
Click here to take the TechRadar survey
The survey closes on Wednesday, April 30, and the optional prize draw is entered by submitting your email address once you've completed it. You must be a resident of the US or UK and at least 18 years old to be eligible to win (with some exceptions listed on the survey page). More terms and conditions here.
If you're not eligible for the prize draw, we still want to know what you think, and you're welcome to fill out the survey. Good luck!
The business landscape is shifting, and speed is no longer a competitive advantage but a necessity for survival. Technology is advancing faster than ever, pushing businesses to adapt quickly while remaining flexible to evolving market demands.
Transformation timelines that once spanned three to five years are a thing of the past, as market demands and competitive pressures push businesses to deliver tangible results in under 12 months. To meet rising demands, accelerate time to market, and boost operational efficiency, organizations must rethink their transformation strategies.
Generative AI is at the heart of this shift, raising the bar as adoption accelerates. Companies have already incorporated AI into their processes, and according to McKinsey & Co, 92% of companies plan to increase their AI investments further in the next three years. Businesses embracing generative AI gain a powerful solution that enhances developer productivity, helps tech teams adapt, and enables them to deliver smarter, faster solutions that drive business impact like never before.
The end of long digital transformation cycles
Despite the market demand for quicker turnaround times, many businesses have long relied on legacy systems and siloed data, making outdated technology and processes a major roadblock to scalability and progress. The issue is that, too often, IT teams remain accustomed to traditional methods and struggle to adapt to change. These are some of the factors that prolong digital transformation cycles and hinder innovation.
Another challenge is that, despite accelerating rates of AI adoption, 52% of projects fail to make it to production, with the average prototype to production time taking eight months, according to Gartner. The lengthy production timelines stall progress, making it difficult for businesses to adapt, innovate, and stay competitive in a fast-moving market.
Whatever the reason may be, businesses can no longer afford slow progress – they must embrace change and look for ways to innovate faster to maintain a competitive edge. In the face of shrinking timelines, AI has emerged as the accelerant businesses need to meet the demand for both speed and efficiency.
AI: The catalyst for business transformation
In the past year, AI has quickly shifted from theoretical concept to real-world solution, transforming how software is built and delivered. A collaborative report from OutSystems and KPMG shows that 93% of executives are planning to boost AI investments. Generative AI unlocks unprecedented capabilities, streamlining software development through automated code generation, rapid prototyping, and code translation from one programming language to another.
As businesses embrace these advancements, the role of generative AI extends beyond efficiency: it becomes a driving force for digital transformation.
The use of generative AI in software development can help businesses accelerate digital transformation by drastically reducing the development time and costs. This shift not only redefines traditional standards but also acts as a catalyst for whole industries to adapt their systems - or risk being left behind.
Combining generative AI with low-code can significantly enhance software development efficiency. Low-code simplifies complex tasks, enabling IT teams to quickly customize, iterate and deploy generative AI solutions. This powerful pairing empowers businesses to build and deploy generative AI-driven applications in record time and with improved workflows—compressing timelines from years to months and delivering tangible business value faster than ever before.
As companies navigate this transformation, they must also consider whether to build or buy software to maximize efficiency. The need to build software faster is also shaping how businesses integrate generative AI alongside human expertise. This collaboration not only speeds up development but also ensures that companies can stay competitive in a rapidly evolving market.
The need for guardrails
While AI adoption opens new opportunities, it also comes with its own challenges. With the drive for faster software development cycles, there is a risk of technical debt and orphaned code if AI output is not properly managed and governed. Without a well-structured governance framework and guidelines, AI-generated code can quickly accumulate technical debt, making it difficult to scale and maintain. Growing technical debt could ultimately hinder businesses from staying competitive.
The implementation of generative AI can also bring serious security concerns. Many generative AI models are trained on datasets that sometimes contain sensitive information, posing potential risks of privacy breaches. Additionally, generative AI may not always account for the latest vulnerabilities, leaving systems open to hackers and cyberthreats. This is why it is vital that businesses establish clear AI governance and compliance measures to ensure ethical and secure implementation. This includes safeguarding sensitive information and ensuring transparency.
The new benchmark for success
Today, digital transformation success is measured by key business outcomes such as speed to market, customer satisfaction, and cost efficiency. Businesses need generative AI tools to build applications in minutes, but just as crucial are the guardrails that ensure these apps maintain quality, security, and governance.
AI-powered low-code platforms are allowing businesses to achieve their goals and deliver on these benchmarks. Digital transformation is no longer a lengthy process – generative AI has expanded what is possible for efficient innovation. In a world where speed measures success, the businesses that succeed will be those that can transform at the pace of generative AI.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro