LiveJournal Development's Journal
 

Below are the 50 most recent journal entries recorded in LiveJournal Development's LiveJournal:

    Thursday, January 13th, 2005
    5:29 pm
    [jproulx]
    Anonymous comment spam
    There's a new patch in CVS that disables all links in comments from anonymous posters: URLs from links will be displayed adjacent to the link text, and bare URLs won't automatically transform into links.

    The motivation behind this is to further curb efforts from spammers that are just trying to up their clickthrough ad traffic and search engine rankings.

    This is only the first part -- we'll soon add a new talkprop that "acknowledges" an anonymous comment, triggered either by unscreening the comment (if it was screened) or replying to it. Acknowledged anonymous comments will then show the links as intended.

    It's not live yet because we want to gauge your initial reactions -- if we need to we can add a per-user option to turn this off, but we hope that's not necessary.
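    As a rough illustration, the link-flattening described above might look something like this sketch (the function name and exact behavior are my own approximation, not the actual CVS patch):

```python
import re

# Hypothetical sketch: disable links in anonymous comments by showing
# the target URL next to the link text instead of a clickable anchor.
ANCHOR_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                       re.IGNORECASE | re.DOTALL)

def neutralize_links(html, acknowledged=False):
    """Return comment HTML with anchors flattened unless acknowledged."""
    if acknowledged:
        return html  # acknowledged comments keep their links as posted
    def flatten(m):
        url, text = m.group(1), m.group(2)
        return f"{text} ({url})" if text.strip() and text != url else url
    return ANCHOR_RE.sub(flatten, html)
```

    Once the "acknowledged" talkprop is set, the same comment would simply be rendered without flattening.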
    4:38 pm
    [mart]
    XMPP PubSub on LiveJournal

    Today I've been reading about XMPP PubSub and now have a general idea of how it works in my head. I'm still not entirely convinced that it's the best way to shift large volumes of entry data between the big sites, but I do think it'd be worthwhile to implement it anyway. The nice thing about XMPP PubSub is that we can implement it in two parts:

    • A companion to synsuck which can consume PubSub content (where the payload is an Atom feed) into type Y journals. This would presumably take the form of a special Jabber client daemon which handles the subscribing and then makes sure that the received items end up in the right journals on the site.
    • A component which publishes posted journal content from all journals. I'm still not sure of the best approach for this yet, but whatever happens it should only be called on to publish entries for which there are active subscriptions (or else it'll get really behind on LiveJournal.com). I'm currently a little confused about who is responsible for distributing the message to the various servers where there are subscribed clients, though. This component would serve a similar purpose to the /data/atom output, but without all the polling.

    The PubSub consumer seems like a good place to start, assuming there are actually data sources we can subscribe to for testing. It looks like the PubSub.com feeds aren't suitable because the client must connect directly to their server to do it, and they only serve up their own feeds. If I'm understanding correctly, the pubsub consumer daemon will need to have an account on a Jabber server through which it sends and receives messages. LiveJournal.com has the resources to run its own Jabber server quite easily, and others who don't want to run their own could presumably just use an account on jabber.org or whatever, so this doesn't exclude anyone.
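    A minimal sketch of the consumer side, assuming a Jabber client daemon hands each received item to something like this (all names and structures here are hypothetical, not LiveJournal's actual schema):

```python
# Hypothetical sketch: route incoming PubSub items (each carrying an
# Atom entry) into the right type-Y journal on the local site.

class PubSubConsumer:
    def __init__(self):
        self.subscriptions = {}   # pubsub node -> local journal name
        self.journals = {}        # journal name -> list of stored entries

    def subscribe(self, node, journal):
        self.subscriptions[node] = journal
        self.journals.setdefault(journal, [])

    def on_item(self, node, atom_entry):
        """Called by the Jabber client daemon for each received item."""
        journal = self.subscriptions.get(node)
        if journal is None:
            return False  # no active subscription; drop the item
        self.journals[journal].append(atom_entry)
        return True
```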

    If anyone wants to add anything or correct me, please go right ahead…

    11:40 am
    [banana]
    E-mail forwarding
    I have a support request open for a problem with e-mail forwarding. I'm not sure if/when/how it would come to the attention of developers, so this is a bug report. It's still happening. I can provide headers from a mis-directed message if anyone wants to see them.
    Wednesday, January 12th, 2005
    11:41 pm
    [drumguy2002]
    4:09 pm
    [laochbran]
    Beyond a server-based content distribution network
    [info]mart has raised some very interesting ideas about how to integrate the social networks of journaling sites. The models he has proposed are sensible, logical extensions of the way Usenet was built; however, they introduce new challenges in terms of server performance. To be specific, a lot of traffic between servers must be created and managed so that content is duplicated around the place. This creates a number of challenges:

    Sites lose control over who receives served content. If my friends-only entries get copied into a cache at DeadJournal, then I am now trusting DeadJournal to maintain my security as well as trusting LiveJournal. Ideally, the number of trust relationships anyone has to enter will be minimised.

    Sites serve content from other sites. Proper payment for hosting can become problematic if someone establishes a primary relationship with one site, but most of their content gets served by another site. The most likely consequence is that big sites like LJ will be paying for even more bandwidth and disk, subsidising smaller sites. This then requires the establishment of chargeback mechanisms or other cost-recovery devices, and the difficult politics imposed by the trust issue get even worse.

    A fairer system would be one in which sites serve their own content and nobody else's, and yet are able to provide a unified friends page. This can be achieved by separating the aggregation function from the content-delivery function, and using client-side includes.

    The system might work like this:

    I send a request to LiveJournal for my friends page. LiveJournal sees that my friends page is 50 entries long, and finds the ID and timestamp of 50 entries that it might serve to me. It then sends requests to TypePad, DeadJournal and GreatestJournal containing the lists of friends that I have at those places, and a timestamp range. TypePad, DeadJournal and GreatestJournal send URLs (but no content) back to LiveJournal for any entries that meet those search criteria.

    LiveJournal then generates a friends page that contains client-side includes for the entries that are assembled into my friends page, and my web browser is then responsible for fetching the entries back from the various sites where they live. (Would this method increase the ability of my web browser to cache LiveJournal entries locally and reduce the load on LJ when I refresh my friends page? That's a fun possibility.)
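    The aggregation flow described above might be sketched roughly like this (the site query API and the iframe-based includes are purely illustrative assumptions on my part):

```python
# Hypothetical sketch of the aggregation step: ask each partner site for
# the URLs (not content) of recent entries by my friends there, then emit
# a friends page of client-side includes fetched by the browser.

def query_site(site_index, friends, since, until):
    """Stand-in for a remote call; returns (timestamp, url) pairs."""
    return [(ts, url) for (user, ts, url) in site_index
            if user in friends and since <= ts <= until]

def build_friends_page(site_indexes, friend_lists, since, until, limit=50):
    found = []
    for site, index in site_indexes.items():
        found += query_site(index, friend_lists.get(site, set()), since, until)
    found.sort(reverse=True)  # newest first
    # each entry becomes a client-side include; the content never
    # passes through the aggregating site
    return [f'<iframe src="{url}"></iframe>' for (_, url) in found[:limit]]
```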

    In version 1 of the protocol, it might be best to get only public entries from aggregation partners. Version 1.1 could incorporate an assertion of my identity to those sites, allowing more secure entries to be distributed to me.

    The weakness of this method is the delay introduced by the dynamic query to other sites to fetch a list of content, the possibility of timeouts and so forth. Ideally, this could be managed with a highly-intelligent client. Alternatively, it simply becomes a marketing opportunity for our Evil Robot Overlords (who have been nice to support people already - I like that a LOT) who get bragging rights over the fast integration between TypePad and LiveJournal, but can say "Integrating with sites outside our Evil Corporate Alliance may delay processing of your friends page."

    If the version 1 (non-secure) version of the protocol is implemented, it should be possible to cache data pretty effectively.

    Perhaps LJ and TypePad can market an ultra-clever browsing client that does a lot of this integration at the client end to match their ultra-clever posting software?

    Current Mood: creative
    Tuesday, January 11th, 2005
    6:49 pm
    [mart]
    Content Distribution Network

    While thinking about the problem I talked about in my previous entry, it occurred to me that it is quite wasteful for every site to have to talk to every other site. Instead, we can borrow from the USENET model and create a structured distribution network. For example:

    A completely hypothetical network layout, of course. The basic principle here is that each node has a set of peers and keeps track of which of those peers are interested in each journal. Subscription control messages as well as entry state changes are passed around the network through these channels, and since the links are created through co-operation between two nodes they can either be persistent sockets or pull-type connections depending on the needs of the two peers. Nodes must also track which journals should be forwarded on to neighbours, to avoid redundant forwarding and ensure that smaller sites don't get overwhelmed with data.
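    The per-node subscription tracking might look roughly like this (names are hypothetical; the point is forwarding an update only to interested peers, and never back to the peer it came from):

```python
# Hypothetical sketch of per-node routing state for the distribution
# network: each node remembers which peers want which journals.

class Node:
    def __init__(self, name):
        self.name = name
        self.interest = {}  # journal -> set of peer names wanting it

    def subscribe(self, peer, journal):
        """Record that a peer wants updates for a journal."""
        self.interest.setdefault(journal, set()).add(peer)

    def forward_targets(self, journal, came_from=None):
        """Peers an update should be forwarded to, excluding its source."""
        peers = self.interest.get(journal, set())
        return sorted(peers - {came_from})  # avoid echoing to the sender
```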

    All of the nodes need to know about all nodes which produce content. To avoid nodes tampering with the data as it passes through, the data is signed, and each content-producing site has its own keypair. Key exchange is the tricky part, as it is the only part of the process where every node must connect to every other node directly so that everyone has everyone else's keys.

    As you can imagine, this is a closed network as it requires co-operation between nodes. This is much like USENET, but the network will be a lot smaller. The obvious question is "What's in it for the sites?", and that is a good question. Big sites benefit from reciprocal links because they are trading valuable content, but bigger players have no real reason to let the little players in. As distasteful as it may seem, someone has to pay for these things, and so the worst case is that the USENET model is followed where a peer pays an upstream provider to let it feed from them. This isn't really that bad, as a bunch of smaller services can co-operate together to get a single link to the main network and share costs between them.

    I think this, really, is the only feasible model for now. If we design it right it could be general enough to let producers and consumers that aren't LiveJournal-based in later, such as (for example) TypePad pushing content into the network via LiveJournal, and aggregator peers which suck data from a bunch of RSS feeds and republish them onto the network as well as user-oriented aggregators which only consume content and provide something not unlike a LiveJournal friends page for those who don't have any wish to publish but want to read. That's for the future, though... for now, it'll probably just start as a small network between LJ and DJ and perhaps Plogs.net. What do you think?

    2:50 pm
    [mart]
    Efficient Linking of Weblog/Journal Sites

    It would be very cool if certain aspects of the social networks created by the different LJ-based sites could be absorbed together. Right now we have the rather lame solution of adding RSS feeds from other sites, which taken to the logical extreme means that all of the sites end up sharing one namespace, but all of the journals have different names depending on where you are. This is a mess. RSS works okay for pulling in content from outside, but when it comes to other LJ-based sites we could do so much better, and do it much more efficiently. I should be able to add scsi@deadjournal.com to my friends (or "watch"!) list on LiveJournal and have the entries from that journal appear on my friends page. We talked about this a few years back, but the situation has got a lot better since then as other people have done some of the things we would have had to do.

    On the surface this doesn't seem too hard: we need some efficient mass-transfer protocol so that the sites can pull updates (create, edit, delete) in batches between each other, and some way to express what journals each site wants. Sending the list of journals over each time would be a bit lame, so each site could instead maintain subscription lists for the other sites it can link with. Of course, this requires co-operation between the different sites, so it's not brilliant. Also, I'm not entirely sure if we have the right data to be able to distinguish the create, edit and delete operations... but syncitems does this for a single journal, right? Since these special accounts won't have any of their own journal views, the entries can be safely deleted once they're too old to appear on a friends view.
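    A rough sketch of the batch-pull idea, assuming a syncitems-style log of (time, action, item) tuples (the field names and structure are my invention):

```python
# Hypothetical sketch of batch pulling: a syncitems-style change log lets
# a peer site fetch everything that changed since its last sync in one
# pass, collapsed to one final operation per item.

def changes_since(sync_log, last_sync):
    """Collapse the log into one final operation per item; newest wins."""
    latest = {}
    for (t, action, item_id) in sorted(sync_log):
        if t > last_sync:
            latest[item_id] = (t, action)
    return {item_id: action for item_id, (t, action) in latest.items()}
```

    For example, an item that was created, edited, and then deleted since the last sync would show up only as a delete, so the consuming site never stores it at all.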

    By now the Atom folks might well have something we could use for this. Back when we originally discussed it I was pushing for creating our own XML format, but that was before the Atom folks came along and did basically what I was proposing. However, last I was tracking Atom they were still trying to decide on the XML format and not really near any kind of API for pushing content around in an organised fashion.

    Obvious caveats: need to be able to comment on other sites with a LiveJournal account and vice-versa (decentralised TypeKey-style auth fixes this), handling security-locked entries securely (can't without a user-trusts-site relationship), knowing how many comments there are on the remote entry (or just make it say "Read Comments" for those)... and do we let (say) DeadJournal users post in LiveJournal communities? Should communities be shared too?

    Yeah, this entry doesn't have any concrete stuff in it. I think it's worth working towards a proper design for this, though, as it's something we've talked about forever.

    Saturday, January 8th, 2005
    7:45 pm
    [bsdguru]
    MogileFS and Perlbal under FreeBSD
    Seeing that I have issues with the mailing lists I'm going to post this here.

    I'm busy looking at running MogileFS and Perlbal on FreeBSD servers, but Perlbal includes code that relies on Linux::AIO and various other Linux-specific bits and pieces, which currently prevents running these systems under FreeBSD.

    Are there any plans for Perlbal and MogileFS to run under FreeBSD, or will they continue to use non-portable Perl modules in future?

    I'm busy working on a pseudo web proxy for the FreeBSD ports system, intended for slow leased-line sites with multiple boxes, to save bandwidth by reducing the number of times distfiles are downloaded when they've already been fetched once.

    Current Mood: tired
    Friday, January 7th, 2005
    7:48 pm
    [bradfitz]
    Six Apart and ownership of code
    I know some of you are still apprehensive about the Six Apart acquisition, so I wanted to give you guys some updates.

    Six Apart doesn't want to own your code.
    They don't even care too much about owning Danga's code. They just want a right to use it, and they want to make sure somebody is able to legally go after any company that tries to violate the GPL'ed code by selling server software based off LiveJournal code without releasing their modifications. (no, this does not mean attacking DeadJournal, because DeadJournal doesn't sell/distribute their changes, and DeadJournal even gives stuff back from time to time....)

    We're in talks now with the Free Software Foundation to figure out how to best do this. (well, they sent us some mail offering help and we replied saying that'd rock, but that was just today, so "in talks" might be overstating it... but we're working to start talks)

    Under our old pre-SixApart TOS, it said something like we own all your contributions. That was just boilerplate that I personally never wanted in there, but I wasn't good at legal documents, so I left it. When SixApart bought the company they left that in, but their lawyers changed some words a little to make it clearer. So while technically SixApart could arguably own your contributions now (if you agreed to the TOS), they don't want them. We want you guys to own them, as long as we have a license to use them. (under the GPL, Artistic, BSD, whatever....)

    I'll give you guys an update when we're further along in these talks. Probably next week ... not this weekend.

    Future LiveJournal contributions
    People think we're ditching this codebase and moving to TypePad or Movable Type. You know how hard that'd be? Think how many LiveJournal features and formats we'd have to port. We love the LiveJournal code and associated codebases and you're still going to see us hacking on them, adding new stuff, fixing stuff, etc.

    To come..
    More announcements when we know more. Feel free to flood us with questions and we'll answer. We don't want anybody left in the dark about stuff.
    4:03 pm
    [bradfitz]
    partial ack tcp corruption bug
    Thanks to all your bug reports and especially the tcpdumps, F5's been able to find and fix (in only 3 hours!) the TCP corruption issues you've all seen.

    The new code is running now and should fix the problems you've been seeing.

    Please report any problems that happen past this point. And with pcap files, ideally... thanks!

    (FYI: we're running some pre-release BIG-IP code because we do some bizarre HTTP and load balancing stuff and they wanted us to test it...)

    But it's totally worth it. We can do things like:
    TCL config code... )
    5:05 pm
    [mart]
    Corrupt Data

    Something is corrupting data between the web servers and my client. The most obvious symptom of this is that userpics and picpix/pics.lj images are getting distorted, but it's also corrupting the gzip-encoded data streams on journal views and producing pages that have all of the right characters but in the wrong order. It's intermittent, though; refreshing usually fixes it or at least changes the nature of the corruption. Several other people have mentioned this, too.

    Given the timing, I'm guessing the blame lies with the new network configuration.

    Thursday, January 6th, 2005
    10:55 pm
    [telyio]
    Typekey.
    With the merging of Six Apart and Livejournal, will we be seeing Typekey integration with Livejournal comments? This is the sort of thing that I've been waiting to see from Livejournal for years, and I hope it is being considered.
    Friday, January 7th, 2005
    2:12 am
    [vasaki]
    multi-dimensional categorization
    I know that this topic has already been discussed several times here, but nevertheless.

    I don't use LJ or any other popular blogging service for publishing. Let me explain why. I could personally use a blog for one reason only: I view a blog as a knowledge base with the possibility of making certain parts of it accessible to people. The concept of a "knowledge base" is very much disputable; I see it as a collection of carefully classified information snippets. The key thing here is "carefully classified". A simple categorization is not enough; each snippet has to be classified in several dimensions. For instance, I might have certain pieces related to IBM, and other pieces related to HPC (high-performance computing). Then I would assign the article about "Deep Blue" to both classes, IBM and HPC, to make future reference easier.

    It is a very modern area of research nowadays: ontologies, the Semantic Web, semantic blogging and the like. However, I don't think blogging needs all that technology now; all it needs is multi-dimensional classification, as I described here. A central structure of available classes would also be nice to have. For instance, LJ could additionally contain a central list of keywords, whose meanings are unambiguous, which people can use as classes for their postings. For example, the central directory could contain a tree-like list like this:

    - Companies
    	- IT
    		- IBM
    		- Sun Microsystems
    		- Microsoft
    		- SAP
    	- FMCG
    		- Nestle
    		- Kraft
    		- P&G
    - IT area
    	- ERP
    	- Web
    

    Thus, I could list, for instance, everything classified as both "Microsoft" and "ERP" to find out about any activity of Microsoft in the ERP market. This is closer to so-called "semantic blogging", which I tend to think is quite a utopian idea, and not urgent at all. A simple multi-dimensional, user-specific classification would be enough for the beginning.
    Of course, to make any use of such classifications, we need to add some sort of query language.
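    A minimal sketch of such a multi-dimensional classification with query-by-intersection, purely as an illustration of the idea:

```python
# Hypothetical sketch: each snippet carries several class labels, and a
# query returns the snippets matching all requested classes at once.

def classify(store, snippet_id, *classes):
    """Attach one or more class labels to a snippet."""
    store.setdefault(snippet_id, set()).update(classes)

def query(store, *classes):
    """Return snippet ids classified under every given class."""
    wanted = set(classes)
    return sorted(sid for sid, tags in store.items() if wanted <= tags)
```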

    Links:
    http://nudnik.ru - a Russian blog engine that is exactly the concept I'm talking about (check the small keywords after the # sign; some posts contain several keywords).

    http://www.livejournal.com/community/lj_style/237565.html - simple one-dimensional categorization in LJ. The number of entries in each category is huge, which makes searching very difficult.

    http://www.livejournal.com/community/lj_dev/664279.html - parts of the discussion are relevant; my post doesn't concern the "Friends" concept at all

    http://jena.hpl.hp.com:3030/blojsom-hp/blog/ - Semantic Blogging demonstrator, scroll to the bottom to see the categories.
    Wednesday, January 5th, 2005
    11:48 pm
    [fg]
    more HTTP
    Here is another observation on how HTTP replies are getting mangled:

    If I disable Keep-Alive in the connection request, and only supply one request per connection, things seem to go a lot smoother. Chaining requests in one connection seems to exacerbate the issue.

    I don't know if the fix is complete, but it seems to be working for now.

    Also, someone pointed me to a post about blocking ljArchive. Is there something I can do to make the client more friendly to the servers?
    Sunday, December 26th, 2004
    4:01 am
    [ibjhb]
    Nesting Comments
    What determines whether full comments are shown? If you go look at posts that have a large number of comments, some of the comments show only the subject, who posted, and when. How does the LJ server decide whether to show the full thing or not?

    Current Mood: productive
    Current Music: Story of The Year - Anthem of our Dying Day (Energyradio.FM - Energy X)
    Tuesday, December 21st, 2004
    10:28 pm
    [cemcom]
    Membership lists for communities with more than 500 members
    I've been doing research on visualizing the LiveJournal community space using implicit links between communities. In particular, I'm using the number of members two communities have in common as a weighted edge between those communities. This type of visualization seems to be useful in making salient the strength of inter-community ties and visualizing possible relationships between communities (e.g. mediatory links between two communities-of-communities).
    A brief write-up of the research (with pictures!) is here: http://cemcom.livejournal.com/7511.html

    A large issue with what I'm doing, however, is that I'm not able to get membership lists for communities with more than 500 members. These communities doubtless occupy key positions in the LJ community-space, and any analysis failing to include them is probably incomplete. The community info page (which is what I've been scraping, because it has more of the data I wanted to collect, compared to the FOAF feed) is limited to displaying 500 members. Is there a way I could get this data? I would be willing to write any code necessary, if that is helpful, though it seems like a simple SQL query would take care of my needs.
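    For reference, the edge-weight computation described above is straightforward once full membership lists are available (a sketch, with invented data structures):

```python
from itertools import combinations

# Hypothetical sketch: the edge weight between two communities is the
# number of members they share.

def community_edges(memberships):
    """memberships: community name -> set of member names."""
    edges = {}
    for a, b in combinations(sorted(memberships), 2):
        shared = len(memberships[a] & memberships[b])
        if shared:
            edges[(a, b)] = shared  # weighted edge between communities
    return edges
```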

    Thanks much,
    Eugene
    Sunday, December 19th, 2004
    5:25 pm
    [fg]
    HTTP/xmlrpc
    Hi all,

    I'm seeing strange things happen when I call GetEvents through xmlrpc.

    Sometimes the connection simply closes with no response

    Other times, I get what seems like otherwise proper XML, but with strange strings inserted like this:

    HTTP/1.0 200 OK
    Date: Mon, 20 Dec 2004 00:12:54 GMT
    Server: Apache
    Set-Cookie: ljuniq=5HTnztLtYkRSwZ1:1103501574; expires=Friday, 18-Feb-2005 00:12:54 GMT; domain=.livejournal.com; path=/
    Content-Length: 228115
    Content-Type: text/xml
    SOAPServer: SOAP::Lite/Perl/0.55
    Keep-Alive: timeout=30, max=100
    Connection: keep-alive
    
    image/png
    <?xml version="1.0" encoding="UTF-8"?><methodResponse><params>...
    what is that 'image/png' doing there? I put the request that generated the above reply here in the cut )

    Any suggestions? Is there something suspect with my request or the request's http header?
    Friday, December 17th, 2004
    8:55 pm
    [commie_coder]
    hello from a php hacker
    hi all

    apparently when you join you're supposed to introduce yourself, so here goes. i've been working in php/mysql for the past couple of years, and i've decided to learn perl and particularly mod_perl.

    so, i'll try and contribute to lj development as part of my learning process - hopefully it will be a win win situation :-)

    i'm quite new to perl, but not new to programming or web development, so hopefully i'll be able to take something on sometime soon.

    thanks
    justin
    Sunday, December 12th, 2004
    7:04 pm
    [triptrashbang]
    Meet the Demand for Categorization
    Hello everyone,

    This is my first post to the [info]lj_dev community. I have been using LJ for about two years and support LJ through paid membership.

    I used to use other blogware such as b2 and Movable Type; I joined LJ because its user base makes it easier to connect to other people.

    Like most other experienced LJ users, however, I find myself running into problems that have yet to be addressed by the LJ feature set and are being circumvented with methods that aren't necessarily the "healthiest" ways to propagate communication within an online community. I've seen a lot of people start wondering why these features aren't implemented in LJ.

    In particular, I'm talking about the most common problem of categorization of entries and subscription access to those entries. The demand for this feature within LJ has been made obvious not by the direct demand of such features but by how people are adopting methods of dealing with the lack of such features within LJ.

    There are two ways of dealing with lack of categorization in LiveJournal: The "Friends Only" Journal and the "Multiple Account" scenario. Both methods have serious drawbacks and are not the best way to maintain healthy connections online. The "Friends Only" approach is exclusionary, and the "Multiple Account" approach creates extra management overhead on the part of the user and that user's friends. (I would get into more detail about the problems of both methods of approach but I believe that would be beyond the scope of this community.)

    Anyone not using either of these two methods may opt to create group filters for specific entries, but then runs into the problem of playing "Blog God," having to decide and keep track of who really has access to each post. This is okay when we get to know people through interaction, but becomes a monumental headache when we've just added people to our friends list or are unsure what may or may not be appropriate for all readers.

    The other option that most people adopt is the "all or nothing" approach, where everyone on the friends list gets to read every entry posted by the user. The problem with this methodology is that we are undoubtedly creating bad surfing habits--people left without a choice of what to read skim through it all until something "interesting" hits them. This psychologically objectifies all users into faceless, soulless beings, as what we experience is turned into a commodity for mass consumption. Often this leaves both ends of the experience feeling empty, with people turning to the aforementioned "friends only" blogs after feeling used and exploited.

    An obvious solution to meet the demand for what one might call selective reading (similar to a pick-and-choose market?) is to allow categorization of entries and allow user subscription to such entries. Created properly, this feature has the ability to become very powerful for the following reasons:

    1. Users can maintain one account and not worry about having diverse interests,

    2. Each category can be subscribed to, thus eliminating the need for the user to be exclusive or ignore other users (unless one really must)--in other words, it nurtures the original premise of the internet as it turns it back towards a more "sharing" atmosphere,

    3. Eliminates the need for a "Friends Only" journal, as users can feel comfortable posting to categories that are not sensitive, allowing access to a larger "friend" base,

    4. Allows users to be selective about both whose journal they are reading and what they are reading about, giving the user the power to select from what another user is already sharing,

    5. Depreciates the "fan lists" by allowing users to gradually get to know other users as they interact, rather than subscribing to an "all or nothing" instant-gratification method of adding friends

    6. Can be used as a "front-door" mechanism on the user-info page, and each category can have its own interests--thus making it easier to choose which categories/interests to share with others.

    There are a lot of very dynamic possibilities with a categorization system within LiveJournal. I haven't even really scratched the surface. But I think that if LiveJournal doesn't get on the ball with this soon, someone else will rise to meet the demand of this type of journaling methodology.

    I believe the time has come to incorporate human psychology into the software, rather than letting the software dictate how we communicate with others. (I believe it's creating a lot of dysfunction, perhaps?)

    Regards,

    johnny :)

    Current Music: P h i l o s o m a t i k a
    Wednesday, December 8th, 2004
    9:03 pm
    [andrewducker]
    Friend Count
    I was reading recently about The Long Tail and the clustering effect that occurs where you get an inverse relationship between the number of people that have X friends and X. (5000 people have 1 friend, 2000 people have 2 friends, etc., etc., until only a few people have 200 friends). Anyway, that data isn't easily accessible, so I was wondering if it was possible to put together a graph where X was "number of friends" and Y was "number of people having that many friends".

    If anyone with access has 10 minutes free and a graph-making program handy, that is :->
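    The tally itself is trivial once per-user friend counts are available; a sketch:

```python
from collections import Counter

# Hypothetical sketch: given each user's friend count, tally how many
# users have exactly X friends -- the data series for the requested graph.

def friend_count_distribution(friend_counts):
    """friend_counts: iterable of per-user friend totals."""
    return dict(Counter(friend_counts))
```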
    Monday, November 29th, 2004
    6:28 pm
    [perlmonger]
    internal/external links (and hello)
    Hi, I'm Pete Jordan; I've been reading here for a while, and have now joined because I want a change that, as far as I can see, I can't kludge through S2.

    I'm in the process of reskinning my LJ, as far as possible via a stylesheet on my own server, but via magazine-based theme layer hacks where needed. One thing I want is to visually distinguish internal (LJ) and external links; mostly this is OK, but I've not found any way to distinguish poll links.

    So. What are the chances of mangling the poll rendering code to either wrap the whole thing up in a div with a (say) "ljpoll" class or (better from my POV) generate all internal links with a class of "lj"? This latter is what I'm doing in my theme layer code.

    I'll gladly contribute a perl patch, subject to my lack of tuits.
    Sunday, November 28th, 2004
    10:53 pm
    [mart]
    “Killfiles”

    Users clearly want the ability to ignore every entry and comment from a given user, even in communities. It seems that people are now resorting to doing this in S2 layouts, which is clearly the wrong place to address it. It seems to me that implementing this kind of thing shouldn't be too hard. The main problem is that it's hard to filter out items without reducing the number of items present, due to the way the data fetching works, unless the filtering goes right into the guts of the API, which I think is the wrong place to put it.

    I'm a bit rusty on this stuff since I've not been doing much other than S2 for a while now, but I'm thinking perhaps a couple of new API functions which can be called at some point in the data-fetching process to filter out entries and comments respectively. Depending on where in the proceedings this is done, I expect that the input to the former would be a list of entries and the latter would be a tree of comments.
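    The two filter hooks might look like this (the entry and comment structures are invented purely for illustration; the real API would work on LJ's internal objects):

```python
# Hypothetical sketch of the two filtering hooks: one takes a flat list
# of entries, the other a tree of comments, and both drop items whose
# poster is in the reader's killfile.

def filter_entries(entries, killfile):
    """Drop entries posted by killfiled users."""
    return [e for e in entries if e["poster"] not in killfile]

def filter_comments(tree, killfile):
    """Drop killfiled comments; their reply subtrees go with them."""
    kept = []
    for node in tree:
        if node["poster"] in killfile:
            continue  # dropping a comment also drops its replies
        node = dict(node,
                    replies=filter_comments(node.get("replies", []), killfile))
        kept.append(node)
    return kept
```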

    There's a Zilla bug filed about this already, but if there's any discussion to be had then let's have it here since Bugzilla sucks for discussion.

    Wednesday, November 24th, 2004
    7:20 pm
    [mart]
    Little Toy

    Today, as an experiment, I wrote a little tool to translate Xanga's “Skins” to LiveJournal S2. The results aren't spectacular because the source data isn't, but one day perhaps I'll use this code to make an LJ-specific template system as simple as Xanga's.

    You have to join Xanga to access the skins directory, but bugmenot can probably help out there.

    Friday, November 19th, 2004
    2:47 am
    [omni1100]
    More Journals Per Person
    Hello Folks,

    I am very new to this whole deal, and have just started looking into it. I am posting this question here, but I am not entirely sure that it is in the right place. I'm sorry if this is incorrect.

    Basically, I am looking for an "addon" to the LiveJournal code. Is there some sort of website that lists "mods" or addons for LiveJournal? Or do mods not even exist in this community?

    What I am looking to change is very simple. I think the core code in LiveJournal is great. But what I would like to be able to do is make it so that each user can have more than one journal under their username. The way it is now, if I want to have more than one journal, then I need more than one username - one username per journal. Is there a way to make it so that each user can put as many journals under their username as they'd like?

    Again, I'm sorry if this is in the wrong place. Any replies would be very helpful. My email address is vincec@rochester.rr.com.

    Thanks very much,
    - Vince
    Friday, November 12th, 2004
    12:09 am
    [mart]
    Poor old Zilla

    It's broken again.

    Tuesday, November 9th, 2004
    5:26 pm
    [lightmanx5]
    More on RSS
    I noticed that I could input LiveJournal-specific tags (as long as they were of the "&lt;lj user=" type and not the "<lj user=" type). Also, I made some changes to my XML file, and I noticed that LiveJournal will reflect some of these changes (after it updates itself from the file, of course), but sometimes it won't. I was wondering if anyone knew the process involved and could describe how all this works. Thanks so much for your time!

    Current Mood: amused
    Current Music: Fall River - At Least You Sent Flowers
    Monday, November 1st, 2004
    12:22 pm
    [mannu]
    Why does LiveJournal strip off HTML comments?
    I'm trying to implement category-specific RSS feeds for my journal, by reading in the main feed, and filtering based on metadata embedded into the posts -- in the form of HTML comments. Unfortunately, LiveJournal strips off HTML comments. :( Why?

    I've written about it here on my journal. Any suggestions on how to implement this?
    Tuesday, November 2nd, 2004
    5:36 pm
    [fg]
    export comments
    Hi folks,

    Did export_comments.bml break? I'm getting empty replies when I request the url.
    11:32 pm
    [legolas]
    3 posting dialogs UI design error.
    I think the quick comment UI and the preview page UI are wrong. Both look very similar (two buttons and a checkbox) and present the same basic options (post, preview, and spell check), but they behave very differently. To make matters worse, the quick-comment preview leads to the normal comment page. In fact, the buttons at the bottom of the 'new entry' page present yet another way to offer the same three functions (this time with three buttons).

    On the quick comment box:

    [post comment] [more options] [checkbox meaning: don't post comment when you click post comment (!!!) but preview and check spelling instead]

    On the preview page:

    [post comment] [preview] [checkbox meaning: also check spelling when you click preview]

    On the New Entry page:

    [Update Journal] [Spell check which also previews!] [Preview]

    You know the jokes with the button you have to click but that keeps jumping to a different location when you approach it? Well, this isn't all that different, except this is not meant as a joke!
    Sunday, October 31st, 2004
    10:59 am
    [mart]
    Phone-posting to Other People's Journals

    Last night I made a phone-post but I think I mistyped my caller-id and/or PIN and managed to hit someone else's journal instead. I don't know where it went, but it said it was posted successfully and yet it is not in my journal today.

    I think it would be useful if the phone gateway quickly spelled out the name of the journal associated with the caller-id and PIN you entered so that you can check you got the right one. Of course, this does have a major security implication: if you do mistype and manage to hit someone's journal, you then know whose journal you've got access to, which might lead malicious people to cancel their current phonepost but make a note of what they typed in for later abuse.

    While on the subject, I'd also like to optionally have the phone gateway tell me what it thinks my callerid is when I press the “pound” key to auto-login, since despite trying several combinations I still haven't found what LJ is seeing as my callerid when I call from a British number through my unofficial 0870 (national rate) access number. If I didn't have to type the callerid in every time there'd be less chance that I'd get it wrong in the first place.

    Saturday, October 30th, 2004
    10:26 pm
    [legolas]
    I have something weird (mozilla 1.7.3), that I think is new:

    Open a page with comments to an entry. Use the top 'post a comment' link. With the quick comments feature on, a box appears in the page. Enter some text, check the 'Check spelling and preview', then click the 'post comment' button. On the preview page, use the back button in the browser, then click the 'post comment' button again. You'll end up at an error page.

    (Workaround: click on the 'post a comment' link again, and your comment box will resize a bit, and then you can click post comment again.)

    Can anyone confirm that this isn't just me?

    (Discovered while posting a comment on an lj_backend post while lj was acting up.)
    Thursday, October 28th, 2004
    3:27 pm
    [lightmanx5]
    Syndication
    Is there a section of the LiveJournal website that deals with developing your RSS/ATOM feed specifically for LiveJournal? I'd like to learn how to generate my RSS/XML document so that it does what LiveJournal is looking for, you know what I mean? (If you want the current username of the syndicated account, go ahead and ask, but I don't mean to "promote" it here.)

    For example, there's obviously all this (link) in the FAQ about Syndication, but it doesn't tell me the developer-type information I need to know. I want to know things like, what does LiveJournal do with this piece of code:

    <image>
    <url>http://userpic.livejournal.com/21006217/342088</url>
    <title>LightManX5</title>
    <link>http://username.livejournal.com</link>
    <width>100</width> <height>100</height>
    </image>


    Current Music: Falling Cycle - I Still Dream (Part 1)
    Wednesday, October 27th, 2004
    12:15 pm
    [benfranske]
    Problem with vhosts
    [Error: Irreparable invalid markup ('<virtualhost *:80>') in entry. Owner must fix manually. Raw contents below.]

    Please excuse me for posting here, but I've had this problem in lj_everywhere for a few weeks now and haven't been able to get it resolved, so I'm hoping someone here can tell me what I'm doing wrong.

    I'm trying to run LJ as a vhosted subdomain on one of my servers, but it takes over all the vhosts for some reason. Here's a snip from my httpd.conf:

    <VirtualHost *:80>
    ServerName blog.edinahigh.org
    PerlSetEnv LJHOME /home/lj
    PerlRequire /home/lj/cgi-bin/modperl.pl
    </VirtualHost>

    <VirtualHost *:80>
    ServerName mail.franske.com
    DocumentRoot /home/www
    </VirtualHost>

    For some reason if you go to mail.franske.com you end up with the LJ page even though you should be getting /home/www. If I comment out the PerlRequire line in blog.edinahigh.org everything returns to normal and vhosts work as expected (except of course you can't get to LJ). The modperl.pl file seems to be overriding my apache config. Any ideas?
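
    For what it's worth, with mod_perl 1.x a PerlRequire'd file runs at server startup, so any handlers it installs at server scope apply to every vhost, which would explain the takeover. One common workaround (a sketch, not official LJ guidance, and the port number is an assumption) is to run LJ in its own dedicated Apache instance on a private port and proxy just the one hostname to it from the main server:

    ```apache
    # Sketch: main Apache proxies one hostname to a separate
    # LJ-only Apache instance listening on 127.0.0.1:8080.
    <VirtualHost *:80>
        ServerName blog.edinahigh.org
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName mail.franske.com
        DocumentRoot /home/www
    </VirtualHost>
    ```

    That keeps LJ's mod_perl handlers out of the Apache instance that serves your other vhosts entirely.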
    Tuesday, October 26th, 2004
    11:16 pm
    [crschmidt]
    RDF Data on LJ
    As an RDF nut, I love seeing more data exposed as RDF. LiveJournal is one of the world's largest sources of RDF data, in the FOAF it exports, and in many ways tends to be on the leading edge of new ways of exporting data for use by its users. However, in a few ways, LJ is lacking, especially in the RDF department.
    • Latest-RSS.bml - Although this is RDF, it is not properly formatted: <lj:mood id='' /> uses an un-namespaced "id" attribute. Fixing this up is a one-line patch, available here.
    • RSS 1.0 feeds for entries - Although the forms we currently export (RSS 2.0 and Atom) can be converted to RDF (RSS 1.0 is an RDF-based format, unlike RSS 2.0, which is just XML), it would be nicer for use with RDF tools to provide an RDF form natively. This is (almost) trivially simple, and could allow for adding the metadata described in this lj-dev post with almost no work.
    • FOAF updates - There are a couple of changes which could be made to FOAF which would be useful to many tools without significant additional data load for the site: the display of actual locations (using the VCARD spec), exporting birthdates in a more commonly used format (using the bio: namespace, rather than an RDF predicate which is not in common use), and adding a user picture as a depiction of the user are the most common requests I've seen. (The beginnings of a patch for this are available.)


    These are all relatively simple changes which I would be totally willing to submit patches for, but I'm not sure that there's any appetite for more RDF on LiveJournal. I know that I want it, and I know I've been asked by several people why LJ doesn't export RDF-based RSS feeds.

    I'd like feedback. Should I go ahead and work on patches to increase RDF output? These things would basically correct current data or lie alongside current options, and would only be used by tools which understand it, most likely. (Of course, there's also the fact that users might see others and get confused, but really, users almost always end up confused in some way or another.)

    Can I get either an "Amen" or "Shut up and go away" for more RDF on LJ?
    Thursday, October 21st, 2004
    8:27 pm
    [crazyscot]
    So, about this PhonePost issue...?
    Q. When is a WAV not a WAV?
    A. When it doesn't have a RIFF WAVE header.

    (Since I drafted this post I've had a search around and saw that there is a "known issue with phone posts" in the Support yellow box, plus a few open support requests saying phoneposts don't play, but I can't seem to find discussion of what that issue is. [I have no support privs.])

    To cut a long story short, a friend tried out PhonePost today, and I wanted to hear what he had to say. The alleged WAV file he recorded carried a MIME type of audio/wav when I downloaded it, but clearly wasn't one; poking into it, it didn't even have a standard RIFF header (which specifies the sample rate, encoding type etc) - it just seemed to be garbage. (There's a possible related issue here, in that I think my friend specified Ogg Vorbis format for his phone posts, and that was ignored; but he's best placed to take that up.) I tried a few plausible options to sox but didn't get anywhere.

    I poked around the LJ CVSweb for clues; eventually downloaded the libgsm-tools Debian package and was able to use tcat to convert the file into a raw μ-Law which I was then able to pass to sox to play. (tcat -u -p foo.gsm | sox -r 8000 -b -U -t raw - foo.wav was the rune I figured out.)

    At the very least, it would be nice if the LJ backend would add proper RIFF headers to these files. (I'm assuming GSM is indeed a valid compression scheme for the WAVE format; if it's not, then please stop misrepresenting them!) If GSM is intended to be the "way forward" for non-Vorbis phone posts, then judging by the support board and my experience today there seems to be an urgent support issue to resolve here, or at the very least an FAQ to write on how people should play the darn things.
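
    As an illustration of the "add proper RIFF headers" suggestion: once the GSM payload has been decoded to raw 8 kHz μ-law (as with the tcat rune above), wrapping it in a playable WAV is just a 46-byte header. A sketch (mono μ-law, WAVE format tag 7; this is illustrative code, not anything from the LJ backend):

    ```python
    import struct

    def mulaw_wav_header(data_len, sample_rate=8000):
        """Build a RIFF/WAVE header for mono 8-bit mu-law audio.

        Uses format tag 7 (WAVE_FORMAT_MULAW) with an 18-byte fmt
        chunk (cbSize=0), giving a 46-byte header in total.
        """
        return struct.pack(
            "<4sI4s4sIHHIIHHH4sI",
            b"RIFF", 38 + data_len,   # bytes remaining after this field
            b"WAVE",
            b"fmt ", 18,              # fmt chunk size incl. cbSize field
            7,                        # wFormatTag: mu-law
            1,                        # channels: mono
            sample_rate,              # samples per second
            sample_rate,              # byte rate (1 byte per sample)
            1,                        # block align
            8,                        # bits per sample
            0,                        # cbSize: no extra format bytes
            b"data", data_len,
        )

    # e.g.: open("foo.wav", "wb").write(mulaw_wav_header(len(raw)) + raw)
    ```

    With a valid header in place, ordinary players and sox can handle the file without needing to be told the sample rate and encoding by hand.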
    7:17 am
    [mart]
    Zilla is broken

    Broken is Zilla.

    Sunday, October 17th, 2004
    8:04 pm
    [thefowle]
    mypost.html?ljuncut
    hello all. been idling here forever, and for once i have something useful that i actually tried doing. i have a really badass thread on semantic web & niche media markets but i wanted to post a link to the page which would show the far-more-concise lj-cut version. i resorted to linking to the day-of-entry page instead. the basic premise is i can do mypost.html?ljuncut and it'll look like it would from a friends page or from a journal page.

    this is a simple patch to take the arguments and feed them into LJ::CleanHTML::clean_event from within EntryPage_entry. this is completely untested code in every manner possible, but i believe this is how it would be done. its near the tail end of a long call stack - very nice form there, lj coders, thanks - and fairly autonomous, so it looks ok.
    on line 262 of the august 25 2003 snapshot, insert:
    # show a cut edition of your page in post form.
    if ($opts->{'getargs'}->{'ljuncut'}) {
    $entry->{'props'}->{'opts_preformatted'}->{'cuturl'} = 1;
    }

    ideally it should also display something indicating that it is a cut page?

    first time really digging through lj code. very nice. took a bit of cross-referencing (mainly verifying that getargs was in fact the uri's arguments), but far less effort than expected. well done!
    Tuesday, October 5th, 2004
    12:12 am
    [ashley_y]
    Trusting Styles

    Is any work being done towards the concept of trusting other people's S2 styles?

    Consider: you've friended me and I happen to look at your friends page. On it I see my own entries, including private ones, but in your style. Unbeknownst to me your style "reports back" all my private entries to some server by loading images with my private data embedded in the URLs.

    Now, I trust the official styles not to do this, of course. And I trust my own styles. But I don't trust other styles...

    Saturday, October 2nd, 2004
    5:52 pm
    [cemcom]
    Community Membership Data
    Hey all, I'm working on doing some research with LJ community membership data. For communities with 1000+ members, FOAF data is incomplete and the Community Info page has a link to the directory, which I know is generally off-limits for scraping (not that I'd want to write the code to scrape it anyway). Anyone have any suggestions for how I can get the full list of community members?

    Also, there doesn't seem to be an easy way to get 'member of' data without scraping User/Community Info pages... That data's not in FOAF or in the Friend-Data (fdata.bml) outputs. Do I just scrape the info page, or am I overlooking something?

    Thanks much.
    1:55 pm
    [whitaker]
    Community Rename Security
    With the resolution of Zilla Bug #593 and the subsequent disabling of community logins, communities are no longer able to use rename tokens, since that page requires the target user to be logged in. A user complained about this one day because they were able to purchase a rename token for their community, but then had no way to use it.

    2 solutions:
    1) Disable purchasing of rename tokens for communities (lame)
    2) Add authas support to /htdocs/rename/use.bml (good)

    The problem comes when we decide what the policy should be for using #2. Who should be allowed? Any maintainer? Only the original user? Some other criteria?

    We've been discussing this internally for a few days now and the basic consensus is that any community maintainer should be able to do the rename... and the responsibility will be on communities to only have maintainers that should actually have full admin access to the community. Obviously the decision made here will eventually have repercussions throughout the rest of the site, as we run into this problem in other places... so I wanted to make sure this was well thought out.

    All of the staff members seem to agree that allowing any maintainer is the right thing to do, but I just wanted to get some input from everyone to see if they have strong objections or better suggestions. Thoughts?
    2:18 pm
    [vampwillow]
    TTL on RSS feeds
    Is there a reason that the LJ code taking in RSS feeds ignores the TTL information?

    I've noted this because a feed I operate has the following in the User Info:

    Syndication Status: Last checked: 2004-10-02 06:14:33
    Next check: 2004-10-02 07:14:33


    ie. it is being checked hourly, yet the TTL set in the RSS datastream is

    <ttl>240</ttl>

    i.e. it should only be checked every *four* hours.

    If a feed specifies a TTL, shouldn't we actually use that information rather than ignore it? (i.e. 'be nice'!)
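
    Honouring the TTL would just mean reading the optional <ttl> element (minutes, per RSS 2.0) when scheduling the next poll. A sketch of the idea, outside synsuck's actual Perl (the function and default here are invented for illustration):

    ```python
    import xml.etree.ElementTree as ET
    from datetime import datetime, timedelta

    DEFAULT_POLL_MINUTES = 60  # the current fixed hourly check

    def next_check(rss_xml, last_checked):
        """Schedule the next fetch from the feed's <ttl>, if present.

        RSS 2.0 defines <channel><ttl> as the minimum number of minutes
        a feed may be cached; fall back to the hourly default otherwise.
        """
        ttl_el = ET.fromstring(rss_xml).find("channel/ttl")
        minutes = DEFAULT_POLL_MINUTES
        if ttl_el is not None and (ttl_el.text or "").strip().isdigit():
            # never poll *more* often than the default just because a
            # feed advertises a tiny TTL
            minutes = max(int(ttl_el.text), DEFAULT_POLL_MINUTES)
        return last_checked + timedelta(minutes=minutes)
    ```

    With the <ttl>240</ttl> example above, a check at 06:14:33 would schedule the next one for 10:14:33 rather than 07:14:33.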
    Thursday, September 30th, 2004
    3:56 am
    [mart]
    Projects That Weren't

    This community is really quiet and dead, so I hope no-one will object to a bit of nostalgia. A little while back I was poking through the archives for this community looking for a specific entry that didn't come up on Google from what I remembered about it. It was quite entertaining to look through the old entries and see the projects that never went anywhere despite the excitement about them at the time, so I thought I'd share! :) In no particular order:

    ESN System

    The Event Subscription/Notification system was a project to generalise the mechanism for letting users know when specific things happen. Right now there is special code for sending comment notification messages and support messages, and no support for subscribing to a specific entry or comment that you otherwise have no relationship with. I briefly wandered back into this a few months ago but didn't get especially far with it, so I didn't post anything about it. My repeated insistence that this should be implemented instead of more special cases for different events is one of the main things that caused the phrase “Mart bug death” to be coined, I believe.

    NNTP Interface

    Brad was once enthusiastic about providing a newsreader interface to LiveJournal, but no-one else seemed to be interested in writing it. Since then, the need has mostly been supplanted by RSS/Atom aggregators.

    Entry File Attachments

    I don't have a good link for this, but I was reminded of it by reading about some of the S2 stuff. Just before S2 came about, it was planned to add support for attaching files to LiveJournal entries. In retrospect, it's probably a good thing that this didn't happen then because it could be done so much more nicely now with the general blobserver. It might be nice to revisit this at some point as it would certainly make posting entries with incidental images easier and nicer, and would generalise what's currently done with phoneposts.

    Separating Friend edge into Friend, Watch and Trust

    As far as I'm aware this is still a low-priority goal. The “friend” relationship is overloaded, meaning “This user is my friend”, “I want to read this user's journal” and “I trust this user” all at once. Currently we work around this pretty badly using hacks with friend groups. Back when Brad originally proposed it there was no general user relationship graph, but we have that now so this is really just an issue of how to present this to users and how to migrate the existing data.

    Trust System

    The trust system was intended to rate how “trustworthy” users are using a trust-graph system similar to the trust metric on Advogato. This idea was junked as it was too complicated and some users got scared of it for various reasons.

    The Great English Removal Project

    This was never quite as great as it was supposed to be. After an initial burst of excitement, the English-removal patches started conflicting with other changes and generally going stale and getting ignored, so to this day there are still parts left untranslatable as well as languages whose maintainers got bored and left the text mostly English anyway.

    LiveJournal 1.0

    A few times there were attempts to make a 1.0 release of the LiveJournal Server software. However, these days it's generally accepted that LiveJournal's development is continuous and so it'll never really be “finished”. The offshoot components, such as memcached, S2, BML, MogileFS and Perlbal could, on the other hand, be packaged into releases, but it involves someone taking the time to write docs and package it up. memcached is the only component with a stable release at this time.

    It's interesting to see how things have changed. Most of the idle projects are still viable, but unfortunately LiveJournal doesn't have the resources to do “cool” things so much these days as it did in the early days when it was small and there was time for Brad (and later, the rest of us) to mess around. Maybe one day…

    Monday, September 27th, 2004
    3:02 pm
    [ghewgill]
    watching community friends
    I'm thinking about an application for a bot that wants to watch the posting activity of friends of a community. There seem to be a few problems with this:

    - I can't use the checkfriends protocol because doing so requires logging in, and communities don't have passwords.

    - I can't view the community's friends page using a minimal styleid, because communities act like free accounts and can't use user-defined public styles.

    - I can't retrieve the friends page as RSS because there is no RSS friends feed.

    This leaves me with the last resort, pulling the community's friends page as html and scraping it for the user posting info. I don't want to have to create a new user just to mirror the community's friends. Does anybody have any better ideas?
    Wednesday, September 22nd, 2004
    5:50 pm
    [mart]
    PhonePost over VoIP

    There are plenty of US phone numbers for PhonePosting, but since I seem to remember that it's all handled by VoIP, is there just a plain old IP interface to it that a user could connect to from an IP phone? Admittedly this won't be amazingly useful until someone figures out how to do mobile VoIP phones, but it would certainly make it cheaper (read: free!) for me to make a PhonePost from my house.

    12:07 pm
    [sweet_daddy]
    Fix for random.bml problem
    Greetings lj_developers.

    I've just joined this community for the purpose of making the following post.

    I recently opened a support ticket because I was finding that over a period of a few weeks, random.bml was showing me some of the same journals again and again.

    In real life I build artificial life simulations, and so I have developed something of a keen eye for finding bugs in random algorithms, which can often appear to be working when they actually aren't doing exactly what you want. I was pretty sure something was wrong with the random dataset. In the ticket, [info]isabeau posted an excellent summary of the algorithm for random.bml. Based on his description I was able to go into the public CVS browser and have a look at the algorithm for myself.

    I believe that there is a weakness in the random picking of journals done by the build_randomuserset subroutine (in the stats.pl file). I understand the motivation for creating a table once a day of the IDs of 5000 journals that will be displayed by the random link that day. The problem is that the way the 5000 journals are selected, the 5000 will always tend to be drawn from a small subpool of the journals. Here is why:

    The SQL query being used to create the 5000 is as follows:

       $dbh->do("REPLACE INTO randomuserset (userid) " .
                "SELECT uu.userid FROM userusage uu, user u " .
                "WHERE u.userid=uu.userid AND u.allow_infoshow='Y' " .
                "AND uu.timeupdate > DATE_SUB(NOW(), INTERVAL 1 DAY) LIMIT 5000");

    What this query does (apart from filtering those journals not in the directory) is select the first 5000 journals in table order which have been updated within the last 24 hours. Given that your table order probably doesn't change from day to day, those regular posters who happen to fall early in the table order will tend to always be included in the 5000 journals available to random.bml. Those users who may be regular updaters but fall in the second 5000 or later will never be selected for the randomuserset table because they will be cut out by the LIMIT, and therefore will never be selected by random.bml.

    Now, [info]isabeau wisely said that this is not likely to be a priority for the development team, and that any solution would need to not increase the load on the database. I decided to look into the code myself. Actually, the fix is super-simple. What you really want are a random 5000 rows of the query you've already got, instead of the first 5000 rows. So the fix is to add three words to your query: an ORDER BY clause making it:

       $dbh->do("REPLACE INTO randomuserset (userid) " .
                "SELECT uu.userid FROM userusage uu, user u " .
                "WHERE u.userid=uu.userid AND u.allow_infoshow='Y' " .
                "AND uu.timeupdate > DATE_SUB(NOW(), INTERVAL 1 DAY) ORDER BY RAND() LIMIT 5000");

    For a reference to the legitimacy of this approach see the rand() documentation here (seems to be down today, see it here in the Google cache). Admittedly, this does incrementally increase the load on the database at the moment of populating the randomuserset table, because MySQL doesn't do a great job of optimizing this kind of ORDER BY clause, but it's a once-a-day job and shouldn't be horrible given your query.

    I believe that this simple change will increase the goodness of the random.bml function by a huge amount, because suddenly a large number of journals will qualify for the random wheel that have never been available before through the random link. Although it may not be important from the point of view of the casual browser having something interesting to read, it may be important from the point of view of lj users wanting their journals to be casually read.

    I hope you'll agree with me, and that someone will take the time to add 3 words to livejournal/bin/maint/stats.pl. I'm happy to write more about this if I've been unclear. Keep up the good work with lj coding generally.

    Regards,
    Colin
    Monday, September 13th, 2004
    11:23 am
    [jproulx]
    Monday meeting / Week todo
    Michael:
    • finish "objectification" of fotobilder this week
    Jesse:
    • "shitload of docs" to check in
    • Figure out why xslt is broken on danga
    • FotoBilder helpurls, interface -> docs
    Junior:
    • work w/ Brad to get everybody to dversion 7.
    • work w/ Brad to get Danga::Socket live
    • wrapper around Perlbal::run to eval { } and on error, write to syslog the error message?
    • public explanation of suspensions (schema)
    • migrate captchas/userpics/phoneposts to mogilefs, drop blob server
    Lisa/Brad:
    • new cluster setup (Wednesday?)
    Mahlon:
    • OpenSSI research (need spare computers?)
    • GSM WAV wrappers
    • S2 Urls for "snarf"ing
    Brett:
    • memcached configuration file and wrapper/restarter
    Whitaker:
    • Finishing up exif support for FB
    • Nokia camera phone research:
      • GSM
      • docs
    • FotoBilder upload protocol updates
    David:
    • zilla
    • setting up dev server at college :-p
    Global database:
    • big table offenders: s2*
      • research clustering s2 styles
      • research compression for s2 tables
    • two globals
    Sunday, September 12th, 2004
    1:33 am
    [mart]
    S2 functions in leaf layers

    Currently it's impossible to create new global functions in a theme or user layer. The reason for this decision was to prevent the situation where later the core layer or a layout layer gets a new function which has the same name and parameters as one in a user or theme layer but a different purpose, thus breaking any styles using that user or theme layer.

    Right now we have a hack in place allowing layouts to declare new class methods as long as they have a name starting with lay_. This hack only applies to this specific case. I'd like to propose a more general solution: rather than little hacks to handle cases like these, let's say that any layer can create any function or method it likes as long as its name starts with the layer type. User layers would have to call their functions user_something, and similarly theme layers would use theme_something (or Page::theme_something). Since there can only be one of each layer type in a style, there is no chance of conflict.
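
    Under this scheme, a user layer adding its own helper might look something like the following (a sketch of the proposed convention, not current S2 behaviour; the function names are invented):

    ```
    layerinfo "type" = "user";

    # allowed under the proposal: the name carries the layer-type prefix
    function user_separator() : string {
        return " - ";
    }

    # disallowed under the proposal: an unprefixed name could later
    # collide with a new core or layout function of the same name
    function separator() : string {
        return " - ";
    }
    ```

    The prefix makes it impossible for a future core or layout release to silently shadow (or be shadowed by) a function from a leaf layer.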

    We can also extend this to declaring new classes. Right now any layer can declare a class, but this shouldn't really be the case, and in certain cases it will break S2's inheritance model, which relies on all classes being declared before functions in order to correctly resolve the inheritance hierarchy. Unfortunately, this means that classes would end up with names starting with lowercase letters, like layout_FunkyEntry, which violates the coding style guidelines. I'm not sure whether this matters or not, really. Of course, to avoid the inheritance problems we must prevent layers from declaring classes which inherit from classes in previous layers, which might be a pain since right now the compiler just retains one unified checker for the entire style so far.

    We can keep the current lay_ special case around for compatibility in addition to the general layout_, but I'd like to stop other layers declaring undecorated classes even though this breaks backward compatibility. I'm hoping that there aren't that many people out there relying on non-core classes and that they'll be people who will understand the error message that results when they recompile such layers. The layers will go on working for now, but it will be impossible to recompile them without fixing them, so we won't just get bajillions of dead journals as soon as the code goes live.

    As ever, I'd like to invite discussion on this.

    Wednesday, September 8th, 2004
    7:28 pm
    [mart]
    S2 User Manual

    It's obvious that something needs to be done about the S2 manual. At the moment it's a bit of a confused mess, not really sure whether it's LiveJournal user documentation, S2 technical documentation or a technology whitepaper. I think the only way we can deal with this is to split it off into two separate manuals. The user-centric one, called the “S2 User Manual” will start from the basics and eventually work its way up to describing the entire system and language. The “S2 Developer Manual” will describe the system with the assumption of technical knowledge and will probably include much of the current S2 manual as well as some documentation on the S2 runtime, how to embed S2 into web applications and so forth.

    I made a start on a possible S2 User Manual as an example of what I mean. As usual, I must disclaim that writing documentation for users isn't what I normally do, although I think I'm getting better at it as I continue to write these kinds of things. With that said, then, I'd like it if people would have a read of the introduction and first chapter so far and comment on it. In particular, are there places where I'm too patronising or where I move too fast and skip over necessary detail? Developers probably aren't the best people to ask about this, but I thought I'd ask here first because until it's got a bit more guts most users probably won't “get it” yet. (it doesn't really tell you much at the moment)

    Tuesday, August 31st, 2004
    7:52 pm
    [rosefox]
    Odd things around the 750-friend limit
    Apparently I hit my 750-friend limit. Then I joined a community and checked the "add community to friends list" box... and it did. "Friends: 753: View Friends." When I try to use add.bml to modify which filters the community is on, it gives me an error about having too many friends, but doesn't seem to remove the community from my friends list, and I can still move it around using editgroups.bml.

    Time to clean up my friends list, I guess! In the meantime, thought this might offer interesting things to poke at.

    Current Music: Clem Snide, "The Ballad Of David Icke"
    Friday, August 27th, 2004
    9:41 am
    [marksmith]
    Proposal: verifyitem protocol mode
    I propose a new protocol mode: verifyitem.

    Basically, it would look like this:

    Input parameters:
    journal: xb95
    itemid: 217819

    Output parameters:
    exists: 1
    public: 0
    comments: 0 (if entry is public)
    poster: xb95

    I've heard of a bunch of third-party sites/tools that scrape an entry directly to verify that it exists and to try to see how many comments it has. To generate those pages, lots of effort goes into loading a lot of information that isn't necessary for quick "does this exist?" checks.

    It'd be nice, I think, to have a protocol mode where you can say "hey, does this exist?" You then get back information on whether the post is public or not, how many comments it has, and who posted it.

    Because it works on ditemids, you can't (easily) scan for entries. We can easily use memcache to rate-limit these queries, too. If the user is going too fast, it returns an error saying so and telling it to slow down and wait N seconds. This system would still require a valid user to do the query. (Perhaps? I think it should.)
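
    For tool authors, a client call against the proposed mode could look like this sketch (the mode name, parameters, and response keys are exactly those of the proposal above and don't exist yet; the flat key/value response shape follows LJ's existing flat-interface convention):

    ```python
    from urllib.parse import urlencode

    def build_verifyitem_request(journal, itemid):
        """Form-encoded body for the *proposed* verifyitem mode."""
        return urlencode({"mode": "verifyitem",
                          "journal": journal,
                          "itemid": itemid})

    def parse_flat_response(body):
        """LJ's flat interface replies with alternating key/value lines."""
        lines = body.strip().split("\n")
        return dict(zip(lines[0::2], lines[1::2]))

    # POSTing build_verifyitem_request("xb95", 217819) to /interface/flat
    # would, under the proposal, yield something like:
    #   exists\n1\npublic\n0\ncomments\n0\nposter\nxb95\n
    ```

    A cheap call like this would replace scraping a full entry page just to answer "does this exist and how many comments does it have?".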

    Would this be useful to any tool authors?

    (And note I haven't talked about this with anybody in the office, so it may get shot down in the end, but I want to gather usage statistics and ideas here before I formally propose it.)