I’ve been thinking about ways to optimize our use of AI within Supernotes, particularly around Vision and Custom Collections. Both are powerful tools, but I see an opportunity to make them work even better together, especially for those of us who use Vision regularly and rely on structuring our notes efficiently.
Selective Vision: Currently, when you use Vision, it processes the entire card—including metadata, title, and content. While this is helpful, there are situations where we only need specific parts of the card analyzed. To optimize resource use and enhance control, I propose that Vision include these options:
Full Card (current option),
Exclude Code Sections (to focus only on non-code text),
Only Title, Tags, and Metadata (faster processing when just refining card details).
This would allow us to tailor Vision’s power based on the needs of each note, reducing unnecessary analysis and focusing AI on what truly matters.
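To make this a bit more concrete, here’s a very rough sketch (in TypeScript) of how a selective Vision payload could be assembled. To be clear, everything here — the `Card` shape, the `VisionScope` type, and `buildVisionPayload` — is purely hypothetical and not part of the actual Supernotes API; it’s just to illustrate the idea.

```typescript
// Hypothetical sketch: selecting which parts of a card are sent to Vision.
// None of these types or functions exist in Supernotes today.

interface Card {
  title: string;
  tags: string[];
  metadata: Record<string, string>;
  content: string; // card body, possibly containing fenced code blocks
}

type VisionScope = "full_card" | "exclude_code" | "title_tags_metadata";

// Strip fenced code blocks so only the prose is analysed.
function stripCodeSections(markdown: string): string {
  return markdown.replace(/```[\s\S]*?```/g, "[code omitted]");
}

// Build the payload that would be sent to Vision for a given scope.
function buildVisionPayload(card: Card, scope: VisionScope): Partial<Card> {
  switch (scope) {
    case "full_card":
      return card;
    case "exclude_code":
      return { ...card, content: stripCodeSections(card.content) };
    case "title_tags_metadata":
      return { title: card.title, tags: card.tags, metadata: card.metadata };
  }
}
```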
Custom Collections: Custom Collections are a great way to group related notes, but they can become even more powerful when combined with Vision. Imagine collections that are automatically updated based on Vision’s summaries, tags, and links. AI could dynamically generate collections that group related cards based on context or projects, helping you find the right notes faster.
Here’s what I envision:
Dynamic Updates: As Vision processes new cards, it could update relevant collections in real time.
Smarter Filters: AI-generated tags and summaries would improve filtering, allowing collections to be more relevant and efficient.
Prioritized Notes: AI could sort cards by relevance (e.g., recently updated or highly connected notes).
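Again, just to illustrate rather than prescribe: a dynamic collection could conceptually be a rule that filters and ranks cards using the tags and links Vision has already produced. Everything below (the `SmartCollectionRule` type, the scoring heuristic, the field names) is a hypothetical sketch, not how Supernotes works today.

```typescript
// Hypothetical sketch of an AI-driven "smart" collection.
// Types and scoring are illustrative only.

interface VisionCard {
  id: string;
  aiTags: string[];        // tags suggested by Vision
  linkedCardIds: string[]; // connections to other cards
  updatedAt: Date;
}

interface SmartCollectionRule {
  requiredTags: string[]; // a card must carry all of these AI tags
  maxResults: number;
}

// Simple relevance score: recently updated and highly connected cards rank higher.
function relevance(card: VisionCard, now: Date): number {
  const daysSinceUpdate =
    (now.getTime() - card.updatedAt.getTime()) / (1000 * 60 * 60 * 24);
  const recency = Math.max(0, 30 - daysSinceUpdate); // decays over ~30 days
  return recency + card.linkedCardIds.length * 2;
}

// Re-evaluate the collection whenever Vision finishes processing a card.
function buildSmartCollection(
  cards: VisionCard[],
  rule: SmartCollectionRule,
  now: Date = new Date()
): VisionCard[] {
  return cards
    .filter((card) => rule.requiredTags.every((t) => card.aiTags.includes(t)))
    .sort((a, b) => relevance(b, now) - relevance(a, now))
    .slice(0, rule.maxResults);
}
```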
In summary, integrating Selective Vision with smarter Custom Collections could lead to a more efficient, AI-driven workflow without compromising the simplicity and focus that make Supernotes special.
I remember @docfips mentioned he hadn’t found much use for Vision yet. I’d love to hear whether these proposed changes might make Vision more useful for different workflows. Also, I know @freisatz has been excited about thoughtful AI features—it would be great to get your thoughts on how this might align with the ideas you’ve been sharing.
Lastly, I’ve put together a technical document with more detailed ideas on how to implement these features. I’ll be sharing that directly with @tobias for further review.
Would love to hear your thoughts on this!
I do want to bring up that just because I don’t use it doesn’t mean it shouldn’t be offered. I just know a lot of companies are adding AI and the associated costs are going up, as they should, since the AI is actually being used and this is a business.
My worry is more that costs may go up for the few of us that maybe just don’t use it.
There are technical solutions that would let each user pay for their own AI usage directly to the provider, via their own API credits. However, your previous comment that this vision “did not resonate” with you has been echoing in my mind for the past two days, making me wonder what dampened the resonance and what needs to be polished further to make it almost transparent to us. That’s why I’ve come up with this new proposal. You’ve already provided valuable input, and I’d like you to keep participating.
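To clarify what I mean by that first point: the usual pattern is a bring-your-own-key setup, where requests go to the provider using a key the user supplies, so usage is billed to that user’s own account. Here’s a minimal illustrative sketch against OpenAI’s chat completions endpoint; the wrapper function and the idea of Supernotes storing such a key are my assumptions, not an existing feature.

```typescript
// Hypothetical bring-your-own-key sketch: the user's own OpenAI API key is used,
// so token costs are billed to their account rather than to Supernotes.
async function summariseWithUserKey(
  userApiKey: string, // supplied by the user, not by Supernotes
  cardText: string
): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${userApiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "Summarise the following note card." },
        { role: "user", content: cardText },
      ],
    }),
  });

  if (!response.ok) {
    throw new Error(`Provider request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}
```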
Oh, I understand. I just wanted to make it clear that just because I don’t use it doesn’t mean I’m against it.
I’m not against AI at all. I enjoy making the odd image generation here and there. And my hesitation, for lack of a better term, has nothing to do with SN at all. I just haven’t had a personal use case for it. I do want to, and I’ve tried here and there, but it’s something that just hasn’t entered my personal system yet.
something something, old dog new tricks, something something
I absolutely know that it has made a massive difference for a lot of people. And even recently, I was trying to make a collection and @tobias reminded me about the AI and that it would have done the simple task I was trying to make work. I asked it for what I was trying to do, and lo and behold, it gave me exactly what Tobias did, lol.
There’s quite a lot to go over, but I think the core ask here is that you’d like Vision to be configurable to only analyse certain aspects of a card. The main driving factor behind this, I assume, is that you’ve been hitting your Superpowers limit and want cards processed faster, as you mentioned. Sending less information won’t make your Superpower credits last longer. The more context given to an AI, the better the response will be, so for both simplicity and accuracy, we will most likely always share the entire card with Vision.
However, the most important consideration that hasn’t been mentioned is privacy. That would indeed be a great reason to make a selective version of Vision, and it was one of the biggest reasons we didn’t initially add parent suggestions to Vision, since we’d have had to send over a list of your entire card library.
We generally try to avoid large feature requests like this one that bundle many additional minor improvements, as they encourage scope creep and make the request harder for other members to discover later on. So for now, I’m renaming this to the more specific “Select what information is shared with Vision”.