iPhone 11 performance will of course be a further improvement over last year’s iPhones, but we don’t yet know much about the A13 chip set to power it.

We believe that TSMC will, once more, be Apple’s sole supplier, and that the chip will use a 7nm process with a new technology known as extreme ultraviolet lithography (EUV), but today we get some deep-dive predictions about what kind of performance improvements we might see …

Macworld’s Jason Cross has donned his speculative goggles and made some predictions about what we might expect from the A13 chip.

Apple is sticking with a 7nm process, as 5nm won’t be ready in time, but EUV should allow about 20% more circuitry in the same area. Couple that with the larger chip size expected, says Cross, and you get something with the same kind of transistor count as the latest iPad Pro models.
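
A rough sketch of how those two factors compound: the ~20% density figure is from Cross’s piece, while the A12 baseline of roughly 6.9 billion transistors and the ~15-20% larger die are our own assumptions, included only to make the arithmetic concrete.

```swift
import Foundation

// Back-of-envelope transistor budget for a hypothetical A13.
// Assumptions NOT from the article: the A12's ~6.9 billion transistors,
// and a die that grows ~15-20% on top of EUV's ~20% density gain.
let a12Transistors = 6.9e9          // A12 Bionic, roughly 6.9B transistors
let densityGain    = 1.20           // ~20% more circuitry per area via EUV
let dieSizeGain    = 1.15...1.20    // assumed larger die, per Cross's expectation

let low  = a12Transistors * densityGain * dieSizeGain.lowerBound
let high = a12Transistors * densityGain * dieSizeGain.upperBound

print(String(format: "Estimated A13 budget: %.1fB-%.1fB transistors",
             low / 1e9, high / 1e9))
// ~9.5B-9.9B, i.e. roughly 40% more than the A12 - in the same ballpark
// as the ~10B-transistor A12X in the latest iPad Pro.
```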

The result?

Apple will likely rely on some architectural tweaks and perhaps better peak clock speeds to increase CPU performance. After all, the company’s chips are already the fastest around, and it won’t take much to hold on to that crown.

Multi-core performance is trickier to predict, but could be as high as 15,000-16,000.
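
For context, here is what that figure would imply relative to today’s chip, assuming Cross means a Geekbench 4-style multi-core score and taking roughly 11,000 as a typical A12 result (neither assumption comes from the article):

```swift
// Implied multi-core uplift if the A13 lands at 15,000-16,000,
// against an assumed A12 baseline of roughly 11,000 (Geekbench 4 style).
let a12MultiCore   = 11_000.0
let predictedRange = 15_000.0...16_000.0

let lowGain  = (predictedRange.lowerBound / a12MultiCore - 1) * 100   // ≈ 36%
let highGain = (predictedRange.upperBound / a12MultiCore - 1) * 100   // ≈ 45%
print("Implied uplift over the A12: \(Int(lowGain))%-\(Int(highGain))%")
```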

Cross expects less dramatic progress in the GPU thanks to memory bandwidth limitations, and thinks the A13 may be outpaced there by the very latest Qualcomm chips.

But if CPU performance improvements track the current curve and the GPU boost is modest, what will Apple do with the roughly 40% more transistors it has to play with? Cross thinks it will be all about on-board AI.

Cross writes:

Last year, Apple improved the Neural Engine in the A12 by far more than expected. The A11’s Neural Engine can do 600 billion operations per second, and Apple made the A12 about eight times faster at 5 trillion operations per second. I’m not sure we’ll see a leap that big, but Apple may well achieve a 3x to 5x improvement with some smart design improvements and a much bigger transistor budget.

Machine Learning and AI are critical parts of the iPhone experience, from taking better photos and videos to augmented reality and Siri. If Apple announced that the A13’s Neural Engine could do 20 trillion operations per second, I would be impressed, but not surprised.

The image signal processor used to process data from the camera sensors is another critical component that is hard to benchmark, but Apple invests heavily in it every year. It is used in conjunction with the Neural Engine and GPU to improve photos and video quality. Apple will improve it again this year. It might even be one of the first to include hardware to encode and decode the new AV1 video codec, a royalty-free video compression standard expected to succeed today’s HEVC, AVC, and VP9 formats. If you don’t know what all that means, just know that most web video (think YouTube) will probably transition to this new video format in a couple years. It’s extremely efficient and isn’t wrapped up in a web of complex royalties.
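
Stepping outside the quote for a moment, the Neural Engine arithmetic is easy to sanity-check; the A11 and A12 figures and the 3x-5x range are Cross’s, and the rest is simple multiplication:

```swift
// Sanity check on the Neural Engine figures cited above.
let a11OpsPerSecond = 600e9   // 600 billion ops/s (A11)
let a12OpsPerSecond = 5e12    // 5 trillion ops/s (A12)
print(a12OpsPerSecond / a11OpsPerSecond)   // ≈ 8.3, the "about eight times faster" jump

// Cross's predicted 3x-5x improvement for the A13:
let predictedA13Range = (a12OpsPerSecond * 3)...(a12OpsPerSecond * 5)
print(predictedA13Range.lowerBound / 1e12, "-",
      predictedA13Range.upperBound / 1e12, "trillion ops/s")
// 15.0 - 25.0 trillion ops/s, which is why a 20-trillion figure
// would be impressive but not surprising.
```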

If Cross’s iPhone 11 performance predictions are on the mark, what’s your view? Would you rather Apple devote more grunt to graphics processing, or will beefier AI make more difference? Let us know in the comments.

Render: Digit.in