We want users to have the tools to avoid harassment, however it might come about, while still being able to keep their profile and use the system within limits that suit them. The goal is therefore to give each user control over who can see some or all of their profile data, whose search results they show up in, and who can send them messages, along with at least a basic way of judging how trustworthy any given other user might be.
Visibility controls
Each user defines which parts of their profile are visible to which people by creating and editing visibility groups.
There will be some basic smarts available to auto-grant the contents of groups to certain sets of other users, so that it's not all done on an individual basis, but granting view permission will be as granular as a single user if required.
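As a rough sketch of how this could be modelled, the shapes below show a visibility group with individually granted members plus auto-grant rules, and a check for whether a viewer may see a given profile field. The names here (VisibilityGroup, AutoGrantRule, canView) are illustrative assumptions, not a settled schema.

```typescript
// Hypothetical shapes for visibility groups; all names are illustrative only.
type AccountId = string; // global account UUID

// Example auto-grant rule kinds: the "basic smarts" mentioned above.
type AutoGrantRule = { kind: "mutual-vouch" } | { kind: "same-instance" };

interface VisibilityGroup {
  name: string;                     // e.g. "close friends"
  members: Set<AccountId>;          // accounts granted individually
  autoGrantRules?: AutoGrantRule[]; // optional automatic grants
}

interface ProfileField {
  key: string;        // e.g. "photos", "location"
  visibleTo: string[]; // names of groups that may see this field
}

// Can `viewer` see the given profile field?
function canView(
  viewer: AccountId,
  field: ProfileField,
  groups: Map<string, VisibilityGroup>,
  satisfiesRule: (viewer: AccountId, rule: AutoGrantRule) => boolean,
): boolean {
  return field.visibleTo.some((groupName) => {
    const group = groups.get(groupName);
    if (!group) return false;
    if (group.members.has(viewer)) return true; // granted as an individual
    return (group.autoGrantRules ?? []).some((r) => satisfiesRule(viewer, r));
  });
}
```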
Vouching and verification system
To give other users confidence that they are looking at a real person's profile, there will be several verification mechanisms.
Some are self-, instance-, or moderator-based options (see the sketch after this list):
- Mods have seen and verified ID for the user and know their real identity, age, etc. (though this is unlikely to be used much, since it's generally a bad plan to show the internet your ID)
- Mods have a trustworthy likeness of the user’s appearance (in some way that is resistant to deepfakes)
- The user's location has been verified by their instance somehow, with the instance then showing others whether someone has a verified location within x km of their stated location
- Profile age will be at least broadly visible to prevent bot farms from spawning lots of accounts
- Verified links to other socials or websites or similar (like Mastodon has)
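One way the items above could surface to other users and instances is as a small set of attestations attached to a profile. The shape below is only a sketch under assumed names, not a defined federation format.

```typescript
// Illustrative attestation types an instance might attach to a profile;
// the kinds and fields are assumptions, not a settled format.
type Attestation =
  | { kind: "id-verified"; verifiedBy: string; at: Date }       // mods saw ID
  | { kind: "likeness-verified"; verifiedBy: string; at: Date } // trustworthy likeness held
  | { kind: "location-verified"; withinKm: number; at: Date }   // within x km of stated location
  | { kind: "profile-age"; createdAt: Date }                    // broad account age
  | { kind: "verified-link"; url: string; at: Date };           // rel="me"-style link check

// What a viewing instance might show next to a profile.
function summarise(attestations: Attestation[]): string[] {
  return attestations.map((a) => {
    switch (a.kind) {
      case "id-verified": return `ID verified by ${a.verifiedBy}`;
      case "likeness-verified": return `Likeness verified by ${a.verifiedBy}`;
      case "location-verified": return `Location verified (within ${a.withinKm} km)`;
      case "profile-age": return `Account created ${a.createdAt.toISOString().slice(0, 10)}`;
      case "verified-link": return `Verified link: ${a.url}`;
    }
  });
}
```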
And some are generated by peers vouching for each other, for example:
- You and another user that you know can mutually state how you know each other (e.g. meatspace / internet friend from elsewhere / etc.) and that you each believe the other account is under the control of the person you know
- These linked users can sign a photo, your location, or a block of text as representative of you (see the signing sketch after this list).
- There can be an anonymised representation of an account's peer network, potentially showing whether there is a mutual contact somewhere in the path and, if so, how many degrees apart you are. Conversely, if an account has no peers at all, this can be highlighted as a potential risk.
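For the peer-vouching items, the sketch below shows one way a vouch could be signed and later verified with an Ed25519 keypair using Node's crypto module. The Vouch fields and the UUID placeholders are assumptions for illustration, not a defined payload format.

```typescript
// A minimal sketch of a peer signing a vouch payload with their own keypair.
import { generateKeyPairSync, sign, verify } from "node:crypto";

interface Vouch {
  voucher: string;   // global UUID of the peer doing the vouching
  subject: string;   // global UUID of the account being vouched for
  relation: string;  // e.g. "meatspace friend"
  claim: string;     // what is attested: a photo hash, a location, or a block of text
  issuedAt: string;  // ISO timestamp
}

// The voucher's long-term keypair (in practice held by their client or instance).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const vouch: Vouch = {
  voucher: "uuid-of-peer",
  subject: "uuid-of-subject",
  relation: "meatspace friend",
  claim: "sha256:<hash-of-profile-photo>",
  issuedAt: new Date().toISOString(),
};

// Sign the serialised vouch (Ed25519 takes no separate digest algorithm).
const payload = Buffer.from(JSON.stringify(vouch));
const signature = sign(null, payload, privateKey);

// Anyone holding the voucher's public key can check the vouch wasn't tampered with.
const valid = verify(null, payload, publicKey, signature);
console.log(valid); // true
```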
Messaging control
In addition to global tools that detect problematic content in messages and flag it for moderation before delivery, there will also be some tools for users to add their own restrictions - keywords that they never want to see, minimum lengths for first messages, etc.
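A minimal sketch of how such per-user restrictions might be applied before delivery; the names (MessageRestrictions, passesRestrictions) and the two rule types shown are assumptions taken from the examples above.

```typescript
// Illustrative per-user restrictions applied before a message is delivered.
interface MessageRestrictions {
  blockedKeywords: string[];     // words the recipient never wants to see
  minFirstMessageLength: number; // minimum length for an opening message
}

function passesRestrictions(
  body: string,
  isFirstMessage: boolean,
  rules: MessageRestrictions,
): boolean {
  const lower = body.toLowerCase();
  if (rules.blockedKeywords.some((k) => lower.includes(k.toLowerCase()))) {
    return false; // contains a keyword the recipient opted out of
  }
  if (isFirstMessage && body.trim().length < rules.minFirstMessageLength) {
    return false; // opening message too short
  }
  return true;
}

// Example: a three-character opener is rejected.
passesRestrictions("hey", true, { blockedKeywords: [], minFirstMessageLength: 20 }); // false
```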
Moderation on both ends
All instances will be required to have moderators, and if a piece of content (message / profile text / photo etc.) is flagged as problematic by a recipient, it will first be reported to their own instance's moderators. If the sender is on another instance, the report can then be forwarded to the originating instance, whose moderators can Have A Word with the sender.
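The routing described above might look roughly like the sketch below; the Report and Instance shapes are placeholders for illustration, not a federation protocol definition.

```typescript
// Rough sketch of report routing: local moderators first, then forward if remote.
interface Report {
  reporter: string;       // global UUID of the recipient who flagged the content
  sender: string;         // global UUID of the author of the content
  senderInstance: string; // e.g. "other.example"
  contentRef: string;     // message / profile text / photo identifier
  reason: string;
}

interface Instance {
  domain: string;
  notifyModerators(report: Report): void;
  forwardReport(toDomain: string, report: Report): void;
}

function routeReport(report: Report, localInstance: Instance): void {
  // The report always goes to the recipient's own moderators first.
  localInstance.notifyModerators(report);

  // If the sender lives elsewhere, the report can be passed along so the
  // originating instance can Have A Word with the sender.
  if (report.senderInstance !== localInstance.domain) {
    localInstance.forwardReport(report.senderInstance, report);
  }
}
```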
Blocking
Users should be able to block other users, which will prevent either user from showing up in the other's search results. The block should be based on some kind of global UUID for the account, so that the block is maintained if either party moves instances.
With UUID-based blocking, we can’t do much about people signing up for new accounts instead of transferring, but it should be better than nothing.
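A small sketch of how the UUID-based block could be applied symmetrically when filtering search results; the BlockStore interface is an assumption for illustration.

```typescript
// Sketch of UUID-based blocking applied to search results.
type AccountId = string; // global UUID, stable across instance moves

interface BlockStore {
  // True if either account has blocked the other.
  eitherBlocked(a: AccountId, b: AccountId): boolean;
}

function filterSearchResults(
  searcher: AccountId,
  results: AccountId[],
  blocks: BlockStore,
): AccountId[] {
  // Hide blocked accounts in both directions: neither party should see the other.
  return results.filter((candidate) => !blocks.eitherBlocked(searcher, candidate));
}
```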