Room 208

Wiki archive

[Tzetze 8/8/2011]

I want to fork the wiki. My main reason is technological; to me, the (well-known) problems with the wiki software, namely all of it, are more important than the “social” concerns that are often complained about. I mean the pervasive lack of secure authentication, the terrible parser, the ugly code, the generally ugly site, that kind of thing, none of which Fast Eddie has really shown any desire or ability to improve.

However, I think that this is only the tip of the iceberg, and that the technical and the social problems could be dealt with in a unified way.

What is the point of TV Tropes? I think that it has been, since near the beginning (originally, I’m sure, the point was finding commonalities in Firefly and BtVS :P), to catalogue recurring patterns in media. And I think that it has been, and still is, successful at that! Right now the wiki probably has thousands if not tens of thousands of distinct tropes described, with pretty good lists of what works they show up in. The various controversies we’ve had, from Cosmetor to the Nakama renaming to sock tropes, have had pretty much no real effect on that goal.

(A fork, if you don’t know, means taking the existing pages on TVTropes and copying them to somewhere else. This is of course perfectly legal, modulo the attribution list actually working :))

I think it’s a good goal. But I don’t think that it’s all a trope site could do. In 2009, one Kurt Cagle wrote an article on TV Tropes as a semantic website, and someone later tried implementing this. The “semantic web” is one of those buzzwords that everybody loves to hate. The idea ties in with computer knowledge representation (representing human knowledge in a computable format). Essentially, using a standard format like RDF/OWL, relationships between entities are represented as simple sentences, or triples, made of a subject, a predicate (a verb or copula), and an object. “Cars/are a type of/motorized vehicle.” “Cordyceps/infects/grasshoppers.” And so on.
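
To make that concrete, here’s a minimal sketch of how such sentences might look as page annotations in Semantic MediaWiki (the software this thread ends up looking at); the property names are invented for illustration:

    <!-- On a page called "Cordyceps": -->
    [[Infects::Grasshoppers]]

    <!-- On a page called "Car": -->
    [[Is a type of::Motorized vehicle]]

Each annotation becomes a subject-predicate-object triple, with the annotated page itself as the subject.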

This hasn’t worked out well for human knowledge in general, for a variety of reasons such as the vast number of verbs needed. But I think it could work for the limited domain of tropes: you only need a handful of relationships, along the lines of a work using a trope, a work subverting or parodying a trope, a trope being a subtrope of another trope. And so on.
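
As a sketch of such a vocabulary in the same annotation style (again, every property name here is hypothetical), a work page might carry:

    <!-- On the page for some work: -->
    [[Medium::Animation]]
    [[Country of origin::Japan]]
    [[Year of publication::1998]]
    [[Uses trope::Defiled Forever]]

    <!-- And on a trope page: -->
    [[Subtrope of::Living A Double Life]]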

That’s the pie-in-the-sky vision. How would this be implemented, and what would be the effects for those people who don’t give a shit about knowledge representation? Well:

The program-readable listings, along with tags, could make searching and filtering much more powerful. For example, if you hate anime, you could go into your settings and tell works tagged with the “animated” medium and “Japan” nationality not to display. Or if you were doing an English-major sort of project, you could get a list of all works using the Defiled Forever trope sorted by year of publication, or all 19th-century works using subtropes of Living A Double Life, or any number of things.
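
With hypothetical properties like the ones sketched above, those last two examples could be written as Semantic MediaWiki inline queries, something like:

    <!-- All works using Defiled Forever, sorted by year of publication: -->
    {{#ask: [[Uses trope::Defiled Forever]]
     | ?Year of publication
     | sort = Year of publication
     | order = asc
    }}

    <!-- All 19th-century works using any subtrope of Living A Double Life;
         <q>...</q> nests a subquery: -->
    {{#ask: [[Uses trope::<q>[[Subtrope of::Living A Double Life]]</q>]]
            [[Year of publication::>1800]] [[Year of publication::<1900]]
     | ?Year of publication
    }}

The “hide anime” setting maps naturally onto an SMW concept, which comes up below.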

The main disadvantage of all this is that it would take a lot of new coding, but it’s doable. It would also make copying the content from TVTropes nontrivial. But, I think this is a good way to make a TVTropes fork something both more fun to browse and more useful.

Kinks to work out:

Other issues:


Brief IRC discussion, 8/8:

Edit 19:50: http://semantic-mediawiki.org/wiki/Semantic_MediaWiki

IllFlower 2011-08-08 19:36 +0000


Looking at the Semantic MediaWiki documentation, it looks like it should be possible to do what we want without too much trouble, using concepts and/or inline queries. The biggest issue, IMHO, will probably be performance: in order to use queries to keep track of examples, each example might have to be on its own page, and I’m not sure how well MediaWiki works under that kind of load (I think IllFlower’s our resident MediaWiki expert here). Actually, performance might be our biggest stumbling block with Semantic MediaWiki and this project in general, since according to SMW's Wikipedia page, performance issues (albeit with Wikipedia, which is much bigger than Semantic Tropes will be) are the main thing keeping it off of Wikipedia.
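
For reference, the two SMW features mentioned above might look something like this (property and page names invented): a concept is a saved query that behaves like a dynamic category, and an inline query on a trope page could collect per-page examples.

    <!-- On the page "Concept:Japanese animated works": -->
    {{#concept: [[Medium::Animation]] [[Country of origin::Japan]]
     | Works that are animated and produced in Japan.
    }}

    <!-- On a trope page, listing examples, assuming each example gets its
         own page annotated with [[Example of::...]] and [[Appears in::...]]: -->
    {{#ask: [[Example of::{{PAGENAME}}]]
     | ?Appears in
     | format = ul
    }}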

Edit 10:28: Looking at MediaWiki's database schema, it seems that each page is just a row in a few per-page tables plus a row per revision in a few revision tables. This shouldn’t be too bad. Though it raises the question of how often these pages should be generated: dynamically? Cached at fixed intervals? Cached each time the page is edited?

Edit 11:15: Apparently Wikipedia and other large wikis use an HTTP cache program called Squid for all their caching. It might be better to render trope/works pages dynamically and just have them HTTP-cached, though this might require extra wrangling to evict pages from the cache when they change.
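
As a sketch of that wrangling (hypothetical config, assuming Squid runs in front of the wiki on the same host): Squid can be told to accept PURGE requests, which MediaWiki sends for a page’s URL on edit when its Squid support is enabled.

    # squid.conf: accept PURGE (cache eviction) requests, but only from localhost
    acl purge method PURGE
    http_access allow purge localhost
    http_access deny purge

    # With $wgUseSquid enabled, MediaWiki then sends on edit something like:
    #   PURGE /wiki/Some_Page HTTP/1.0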

was3 2011-08-09 08:57 +0000


Here’s a little proof-of-concept type thing: http://gorogoroiki.referata.com/wiki/Main_Page (page list at http://gorogoroiki.referata.com/wiki/Special:AllPages ). I’ll need access to the codebase to make certain things work the way we want, and I’ll need a certain unhealthy plant reproductive system to make it pretty.

was3 2011-08-09 19:26 +0000