
Apple announced its ambitious “Apple Intelligence” initiative at WWDC 2024, promising groundbreaking AI-powered features across its product lineup. While the announcement generated excitement, progress has been slow. Six months later, many features remain unavailable, raising questions about the viability of the promise.

The iPhone 16 launched in September 2024 without any Apple Intelligence features, though some have since begun rolling out via software updates. Apple aims to complete the rollout by March 2025.

To evaluate this, it’s worth reviewing:

1) Released Apple Intelligence features so far – What’s available and how they function.
2) The broader promise – Is Apple’s vision for AI integration truly transformative or overhyped?

The mixed reception suggests Apple must address the delay and showcase meaningful progress to meet expectations.


Writing Tools

Apple Intelligence features are rolling out across the iPhone, iPad, and Mac, but notably, none are yet available on the Vision Pro, which is supposed to be Apple’s most futuristic platform. Among the current offerings, the writing tools stand out, enabling generative AI-based enhancements to existing text. Here’s how they work and their practical utility:
Key Features of Writing Tools:

Adjusting Tone:
Friendly Mode: Shortens sentences, adds exclamation points, and uses a casual tone.
Professional Mode: Maintains a terse, formal tone without exclamation points.
Concise Mode: Reduces text length by about 40%, making it arguably the most useful for trimming unnecessary details.

Proofreading:
Catches errors beyond basic punctuation, such as capitalization issues and proper noun formatting.

Summarizing and Table Creation:
Designed to condense or organize large documents into summaries or tables, though functionality appears inconsistent with very large inputs.

Offline Processing:
Everything runs entirely on-device, with outputs generated in 3-4 seconds, making it secure and functional without an internet connection.
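
For developers, the same Writing Tools surface automatically in standard text views; the only choice an app makes is how much of the experience to allow. A minimal sketch, assuming the iOS 18 UIKit writingToolsBehavior trait:

import UIKit

// A minimal sketch, assuming the iOS 18 UIKit SDK. The system Writing Tools
// appear automatically in standard text views; this trait only controls how
// much of the experience the app allows.
let textView = UITextView()
textView.writingToolsBehavior = .complete   // full inline rewriting in place
// .limited shows suggestions in an overlay instead of rewriting inline,
// and .none opts this view out of Writing Tools entirely.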

Real-World Use:
While technically impressive, the writing tools have limited practical value for some users, even professional writers. Adjusting tone or conciseness may appeal to casual users for messages or work emails, but advanced features like summarization and table generation remain unreliable for handling large-scale tasks.

Verdict:
The tools work but seem underwhelming in their current form. They demonstrate potential for casual tweaks but fall short of being transformative or indispensable—especially for power users. More robust and reliable applications will be needed to fulfill Apple’s ambitious AI promise.


Notification summary

Notification Summaries aims to condense app notifications into concise, easily digestible messages. It works by summarizing long messages or multiple notifications—like a group chat explosion—into one or two lines for quick updates. Here’s the breakdown:

How It Works:
Groups multiple notifications from the same app or long messages into a single, summarized notification.
Designed to provide users with the “gist” of their notifications quickly.

Real-World Experience:
Inconsistent Usefulness: The feature rarely produces genuinely helpful summaries. Most notifications don’t benefit significantly from being condensed, as shorter, isolated notifications are already easy to scan.
Humorous Failures: Social media is full of examples and memes showcasing amusing or inaccurate summaries, highlighting the feature’s shortcomings.
Limited Practicality: For many users, including the reviewer, the summarization often feels unnecessary and less helpful than the original notifications.

Verdict:
Notification Summaries is functional but not particularly useful for most users. While it can occasionally provide a laugh, its inconsistent accuracy and limited relevance make it a feature that many, including the reviewer, prefer to disable. This reflects another instance where Apple’s AI promises fall short of adding meaningful value.


Genmoji

Apple Intelligence also includes Genmoji, a playful custom emoji generator that uses AI to create personalized emojis from user descriptions. Here’s how it works and its potential appeal:

How It Works:
User Input: In apps like Messages, users can type a description of the emoji they want.
AI Generation: The system processes the description in about three seconds to create a custom emoji.
Content Moderation: It blocks inappropriate or extreme requests, though it has been known to allow quirky combinations (e.g., a “potato with a gun”).
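
On the developer side, a Genmoji typed with the system keyboard is delivered as an adaptive image glyph inside the text view’s attributed string; the view only has to opt in. A minimal sketch, assuming the iOS 18 UIKit API:

import UIKit

// A minimal sketch, assuming the iOS 18 UIKit SDK. A Genmoji created in the
// system keyboard arrives as an NSAdaptiveImageGlyph attachment inside the
// view's attributed text.
let textView = UITextView()
textView.supportsAdaptiveImageGlyph = true   // allow Genmoji (adaptive image glyphs) in this view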

Use Case:
Creative Reactions: Ideal for expressing specific emotions or ideas not covered by standard emojis.
Limited Appeal: Likely to resonate more with users who enjoy personalization and novelty in communication.

Real-World Experience:
Novel but Niche: The feature is fun and creative but appeals to a very specific demographic. It’s not an everyday utility for most users.
Occasional Use: While some might enjoy creating unique reactions, many users, like the reviewer, find little reason to use it regularly.

Verdict:
The custom emoji generator is a neat, whimsical addition but lacks broad practical utility. It’s fun to experiment with occasionally, but it’s unlikely to be a game-changer for most. Like other Apple Intelligence features so far, it’s cool in theory but somewhat superficial in its everyday value.


Image playground

Another Apple Intelligence feature is the Image Playground, a creative tool for generating cartoon-style images based on text descriptions. It functions as a standalone app, offering a fun way to create custom visuals, though its practicality is questionable.

How It Works:
User Input: Users describe the image they want, adding themes, accessories, or backgrounds.
Photo Integration: Users can start with recognized faces from the Photos app as a base for the cartoon.
AI Generation: The tool generates a few cartoon-style options in a few seconds, entirely on-device.
Output: Once satisfied, users can copy and paste the images anywhere.
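
Apple also exposes the same generator to third-party apps. A minimal sketch, assuming the iOS 18.2 ImagePlayground framework and its SwiftUI imagePlaygroundSheet modifier (the prompt string is just an example):

import SwiftUI
import ImagePlayground

// A minimal sketch, assuming the iOS 18.2 ImagePlayground framework. The system
// sheet handles the prompt, generation, and moderation; the app only receives a
// file URL for the finished cartoon-style image.
struct PlaygroundButton: View {
    @State private var isPresented = false
    @State private var createdImageURL: URL?

    var body: some View {
        Button("Create an image") { isPresented = true }
            .imagePlaygroundSheet(
                isPresented: $isPresented,
                concept: "a golden retriever in a hard hat at a disco"  // example seed prompt
            ) { url in
                createdImageURL = url   // temporary URL of the generated image
            }
    }
}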

Key Features:
Cartoon-Only Style: Generates only non-photorealistic, cartoon-style images, likely to avoid misuse or ethical concerns.
Predefined Prompts: Includes inspiration buttons for themes, accessories, or props to help users get creative.
Content Moderation: Restricts generation of offensive or controversial content, though users can sometimes bypass restrictions with creative prompts.

Real-World Experience:
Fun But Limited: It’s entertaining for casual experimentation, like generating quirky scenarios (e.g., yourself in a hard hat at a disco).
Moderate Reliability: While it avoids overtly offensive requests, users can occasionally game the system with complex prompts.
Debatable Usefulness: Like similar tools from other companies, it’s primarily a novelty. Its cartoon-only output limits its application, and it’s not a must-have for most users.

Verdict:
Image Playground is another example of a fun but niche feature in the Apple Intelligence lineup. While it highlights the potential of AI for creative expression, its use cases are limited to casual, lighthearted scenarios. It’s more a toy than a tool, unlikely to revolutionize how most people use their devices.


Priority notifications

The Priority Notifications feature in Apple Intelligence addresses a long-standing weakness in Apple’s notification management by using AI to surface the most important notifications or emails, especially during Focus Mode. Here’s how it works and its effectiveness:

How It Works:
Focus Mode Integration:
Surfaces critical notifications while keeping less important ones silent.
Aims to reduce the “firehose” effect of constant, overwhelming notifications.

Mail App Priority Sorting:
Uses AI to identify and highlight high-priority emails, placing them at the top.
Categorizes messages for better organization, similar to Gmail’s long-standing features.
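
For context, apps have long been able to label how urgent a notification is, and the AI prioritization presumably layers on top of signals like this. A sketch using the standard iOS 15+ UserNotifications API:

import UserNotifications

// A sketch of the existing, non-AI mechanism (the iOS 15+ UserNotifications
// API); the AI prioritization presumably draws on signals like this.
let content = UNMutableNotificationContent()
content.title = "Gate change"
content.body = "Your flight now boards at gate B12."
content.interruptionLevel = .timeSensitive   // allowed to break through many Focus modes

let request = UNNotificationRequest(identifier: UUID().uuidString,
                                    content: content,
                                    trigger: nil)      // nil trigger delivers immediately
UNUserNotificationCenter.current().add(request)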

Real-World Experience:
Useful for Target Users: For those in Apple’s ecosystem who use the default Mail app and rely on Focus Modes, this feature could improve productivity and reduce distractions.
Gaps in Usability: While it shows promise, its utility is limited for users who don’t use Apple’s Mail app or already rely on platforms like Gmail, which have more mature priority features.
Room for Improvement: Notification filtering could still be better, as some users might find irrelevant alerts slipping through or genuinely important ones getting filtered out.

Verdict:
Priority Notifications is a step in the right direction for Apple, providing a long-overdue improvement to notification management. While it doesn’t break new ground compared to existing tools like Gmail’s prioritization, it offers meaningful value to users heavily invested in Apple’s ecosystem—particularly those juggling many notifications or emails. However, its success depends on refinement and adoption by users who may already have alternative solutions.


Photos App

Apple has added just one AI-driven editing feature to its Photos app: background object removal, via the new Clean Up tool. Despite other companies packing their photo apps with AI tools, Apple has chosen to focus on this specific capability. Here’s how it works and why it stands out:

How It Works:
Edit Mode: Users access the feature by tapping the edit button and selecting the cleanup tool.
Automatic Detection: The tool often auto-detects unwanted objects and highlights them with a rainbow glow, making it easy to select.
Manual Selection: If auto-detection misses the mark, users can manually tap or circle the object they want to remove.
Generative Fill: The AI removes the object and fills in the background seamlessly using generative algorithms.

Performance:
Consistent Results: Works particularly well on scenes with patterns or repeating backgrounds, such as walls or skies, which are easier for the AI to fill convincingly.
Better Than Google’s Magic Eraser?: According to the reviewer, Apple’s tool excels at accurately outlining objects in a single gesture, making it faster and smarter in some cases.

Usefulness:
This feature is one of the more practical AI additions to Apple’s ecosystem. It aligns with real-world needs, such as removing photo bombers or distractions, and does so with precision and ease. While it’s not groundbreaking compared to offerings from Google or Photoshop, its implementation feels polished and effective.

Verdict:
Apple’s Background Object Removal tool is a solid, user-friendly addition to the Photos app. While it doesn’t innovate beyond similar tools from competitors, it demonstrates Apple’s typical focus on refinement and usability, making it a valuable feature for most users.


Recording summaries

Recording Summaries is an AI-powered feature with significant potential but a somewhat convoluted implementation. It allows users to record conversations, transcribe them, and generate summaries. Here’s how it works and where it falls short:
How It Works:
Call Recording:
Can be initiated during phone calls by pressing a recording button, which notifies the other party.
Records the conversation and creates a transcript.

Transcriptions:
Extremely accurate, including speaker identification.
Perfect for keeping detailed records of conversations.

Summarization:
Two summaries are available:
A preview summary shown in the Notes app, often quite detailed and accurate.
A secondary summary accessed within the note, which can be less accurate or inconsistent.
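
To give a sense of the underlying capability, here is a minimal transcription sketch using the public Speech framework rather than Apple’s private Notes/Phone pipeline (it assumes speech-recognition permission has already been granted):

import Speech

// A minimal sketch using the public Speech framework, not Apple's private
// Notes/Phone pipeline; it shows the on-device transcription building block.
func transcribe(fileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else { return }

    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true   // keep the audio on the device

    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)   // full transcript text
        }
    }
}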

Limitations:
No Integration with Voice Memos:
Surprisingly, this feature isn’t available in the default Voice Memos app, where it would be most useful for lectures, meetings, or personal recordings.
Users must manually start the process in the Notes app by creating a new note and attaching a recording.
Inconsistent Summarization:
The quality of summaries varies, with the secondary summary sometimes missing key details captured in the preview.

Real-World Utility:
High Potential: Ideal for students, professionals, or anyone who needs to record and summarize long conversations or meetings.
Awkward Workflow: The lack of Voice Memos integration and the multi-step process make it less intuitive and convenient.

Verdict:
Recording Summaries is a feature with strong promise but marred by poor integration and workflow design. Bringing it to the Voice Memos app and streamlining the process would make it far more useful, particularly for students and professionals. As it stands, its utility is hindered by unnecessary complexity. Apple would do well to prioritize these improvements to unlock its full potential.


Visual intelligence

Visual Intelligence and ChatGPT Integration are two of the standout features on the iPhone 16 and iPhone 16 Pro, offering enhanced AI functionality. Here’s a breakdown of what these features do and how they compare to previous technologies:
Visual Intelligence (iPhone 16 and 16 Pro):

How It Works:
Camera Control: Accessed via a long press of the Camera Control button, which opens a full-screen viewfinder.
Buttons:
Shutter: Takes a photo.
Ask: Uses ChatGPT to answer questions about what the camera sees.
Search: Initiates a reverse Google image search based on the photo taken.
Photo Review: After snapping a picture, users can either search or ask about the image directly through a pop-up menu.
Performance:
The feature is fast and offers a clean, visually appealing UI. However, it’s not entirely new. Similar technology was available with Samsung’s Bixby Vision, dating back to the Galaxy S8 in 2017. The integration of ChatGPT and the improved accuracy make it a more refined version of that earlier tech.
While it’s more capable and accurate, it’s not a groundbreaking innovation—it improves on existing concepts.
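
Apple hasn’t published a Visual Intelligence API, but the long-standing Vision framework gives a rough, on-device flavor of the “identify what the camera sees” step. A sketch, for illustration only:

import UIKit
import Vision

// A rough illustration only: this is not Apple's Visual Intelligence pipeline,
// just the long-standing Vision classifier running on-device.
func classify(_ image: UIImage) throws -> [(label: String, confidence: Float)] {
    guard let cgImage = image.cgImage else { return [] }

    let request = VNClassifyImageRequest()
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])

    return (request.results ?? [])
        .filter { $0.confidence > 0.3 }                                  // keep reasonably confident labels
        .map { (label: $0.identifier, confidence: $0.confidence) }       // e.g. dog-related labels for a dog photo
}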

ChatGPT Integration (with Siri):
How It Works:
Siri and ChatGPT: If you ask Siri a complex question (like a recipe request or a detailed itinerary) that goes beyond its standard capabilities, Siri will offer to hand the request to ChatGPT.
Enhanced Interaction: ChatGPT is then used to generate more detailed, thoughtful responses, tapping into its broader knowledge base.
Performance:
Improved Siri Experience: This integration aims to make Siri more versatile by utilizing ChatGPT’s advanced capabilities for complex queries.
Free Access: The feature is provided at no additional cost, improving the overall value of the iPhone 16 series.

Verdict:
Visual Intelligence provides a useful and more capable version of what we’ve seen in previous devices like Samsung’s Bixby, but with the added power of ChatGPT for better contextual understanding. It’s not revolutionary, but it is a refined and more accurate tool.

ChatGPT Integration with Siri is a long-awaited improvement, turning Siri into a more powerful assistant for complex tasks. While the new Siri animation is a visual upgrade, the true enhancement is the integration with ChatGPT, making Siri much more useful for tasks outside its usual scope.

Both features offer significant improvements, especially for those who use their iPhone for productivity and information gathering, but they aren’t groundbreaking innovations—more like smart refinements of existing technologies.

The Apple keynote clip of a person photographing a dog to identify its breed, instead of simply asking the owner, is a perfect illustration of Apple’s AI features. Tools like the writing aids or object recognition can work well, but they often raise the question: should you rely on AI for tasks you could easily handle yourself? They offer convenience, yet they can feel like overcomplication, automating things that could just as easily be done manually, which feeds the larger debate about whether these technologies truly add value or merely automate tasks that never needed automating.

The idea of using ChatGPT to plan an entire trip, complete with an itinerary, is impressive in theory, especially with its ability to generate detailed schedules and suggestions. However, the question arises: Are people actually going to follow an AI-generated itinerary without any personal input or modifications?

For some, the convenience of having an AI plan the trip could be appealing, especially if they don’t want to spend time researching or organizing details. But for others, there’s a desire for more personalization and flexibility in the planning process. People who enjoy packing or planning might want more control over the choices, like selecting specific activities or deciding the pacing of the trip.

In essence, while AI-generated itineraries can be helpful and time-saving, they may not be a one-size-fits-all solution. Many travelers may prefer a more hands-on approach or at least want to customize parts of the plan to reflect their tastes and interests, rather than simply following a completely automated plan.
