{ "zotero" } Search Results

Accessing Zotero via Chickenfoot: a warm up exercise

I'm currently learning how to program Zotero, specifically how to integrate Zotero with other applications. I document my learning experience to make it easier for others to learn what I've learned. Note that I'm still learning (I'm far from an expert), so I think my advice will improve over time. But I decided not to wait until I am an expert before sharing my experiences.

You can write Zotero plugins (a Firefox extension that accesses the Zotero extension) as a way of extending Zotero — see plugins [Zotero Documentation].

In a series of posts, I would like to explore how to use Chickenfoot as a scripting framework for Zotero and as a way to explore a working Zotero instance. Why Chickenfoot? Chickenfoot is appealing because it's the closest thing we have to a Web automation framework running within a web browser — and hence something that can take advantage of all the scriptability and context of the Web browser. (Besides, much of the work we do with Chickenfoot will be transferable to writing a Zotero plug-in.)

A goal I've set for myself to focus my learning — and to provide a narrative for the series: create a Chickenfoot script to grab all the references I added on a given date, format those items as HTML and wiki markup, and upload them to some social bookmarking systems, including delicious and Connotea.

Note: I'm using v 1.0.7 of Zotero, running on Firefox 2.0.0.17 on Windows XP. I'm also using Chickenfoot v 1.0.4. I assume that you will have some basic knowledge of JavaScript and Firefox extensions. (I plan to provide more background later.) The overall approach we take here is a combination of experimenting with bits and pieces of source code, combined with reading the original Zotero source code.

In this first post, we warm up by studying how to access two important Zotero objects, Zotero and ZoteroPane, from Chickenfoot, and how to perform some basic tasks with them.

First, the Zotero object is arguably the main JavaScript object, one that gives you access to the underlying functionality of Zotero. You can access it in this way (see include.js):

var Zotero = Components.classes["@zotero.org/Zotero;1"].
getService(Components.interfaces.nsISupports).wrappedJSObject;

or, in the context of a Chickenfoot script:

var Zotero = chromeWindow.Zotero;

(See Rewrite the Web with Chickenfoot [JavaScript & Ajax Tutorials] for a description of the chromeWindow, which is the top-level object of the Firefox browser.)

From a thread on the Zotero forums, I learned an incantation (courtesy of Dan Stillman) to access the ZoteroPane:

var wm = Components.classes["@mozilla.org/appshell/window-mediator;1"]
.getService(Components.interfaces.nsIWindowMediator);
var browserWindow = wm.getMostRecentWindow("navigator:browser");
var ZoteroPane = browserWindow.ZoteroPane;

It turns out that you can also use chromeWindow to get at ZoteroPane:

var ZoteroPane = chromeWindow.ZoteroPane;

What can we do with Zotero and ZoteroPane?

Zotero object in Chickenfoot

The first thing to do is to examine the two objects. You can inspect them in the Chickenfoot output panel, where toggling an output value reveals the children of that object. For example, if you run the following code, you get a list of all the children of Zotero (and you can do the same for ZoteroPane):


function list_props(obj) {
  var name;
  var props = "";
  for (name in obj) {
    props = props + " " + name;
  }
  return props;
}

var Zotero = chromeWindow.Zotero;
var ZoteroPane = chromeWindow.ZoteroPane;
list_props(Zotero);

to generate the following list:


init stateCheck getProfileDirectory getZoteroDirectory getStorageDirectory getZoteroDatabase chooseZoteroDirectory debug log getErrors getSystemInfo varDump safeDebug getString localeJoin getLocaleCollation setFontSize flattenArguments getAncestorByTagName join inArray arraySearch arrayToHash hasValues randomString getRandomID moveToUnique initialized skipLoading startupError startupErrorHandler Prefs Keys Hash Text Date Browser UnresponsiveScriptIndicator WebProgressFinishListener JSON DBConnection DB Schema Item Items Notes Collection Collections Creators Tags CachedTypes CreatorTypes ItemTypes FileTypes CharacterSets ItemFields getCollections Attachments Notifier History Search Searches SearchConditions Ingester OpenURL Translate Cite CSL QuickCopy Report Timeline Utilities Integration File Fulltext MIME ItemTreeView ItemTreeCommandController CollectionTreeView CollectionTreeCommandController ItemGroup ProgressWindowSet ProgressWindow Annotate Annotations Annotation Highlight version isFx2 isFx3 platform isMac isWin isLinux locale dir Maps IconFactory

Calculating the number of top-level items

The following simple Chickenfoot script returns the number of top-level items in my Zotero collection:


var Zotero = chromeWindow.Zotero;
var items = Zotero.Items.getAll(true);
items.length;
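Since the larger goal is to grab references added on a given date, it's worth sketching a date filter now. The helper below is hypothetical: it assumes (unverified against the Zotero source) that each item exposes its creation timestamp via getField("dateAdded") as an SQL-style string such as "2008-10-29 12:34:56":

```javascript
// Hypothetical helper: was this item added on the given day?
// Assumes item.getField("dateAdded") returns "YYYY-MM-DD HH:MM:SS".
function addedOn(item, isoDate) {
  return item.getField("dateAdded").substr(0, 10) == isoDate;
}

// In a Chickenfoot script, you could then filter the top-level items:
// var Zotero = chromeWindow.Zotero;
// var todaysItems = Zotero.Items.getAll(true).filter(
//   function (item) { return addedOn(item, "2008-10-29"); });
```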

Toggle the Zotero Panel

The following snippet shows how to turn the Zotero display on and off (equivalent to hitting Ctrl-Alt-Z):
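A minimal sketch, wrapped in a function so it can be reused; note that toggleDisplay() is my guess at the ZoteroPane method behind the keyboard shortcut, so check the name against the Zotero overlay source:

```javascript
// Sketch: toggle the Zotero pane in a given chrome window.
// Assumes win.ZoteroPane.toggleDisplay() exists (unverified method name).
function toggleZoteroPane(win) {
  win.ZoteroPane.toggleDisplay();
}

// In a Chickenfoot script: toggleZoteroPane(chromeWindow);
```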

Printing the title of the first selected item

Finally, a three-liner to print out the title of the first selected item:


var ZoteroPane = chromeWindow.ZoteroPane;
var selectedItems = ZoteroPane.getSelectedItems();
var title = selectedItems[0].getField("title");
title;
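The same two calls, getSelectedItems() and getField("title"), extend naturally to every selected item; here is a small sketch that collects all the selected titles:

```javascript
// Sketch: collect the titles of all currently selected items.
function selectedTitles(pane) {
  var items = pane.getSelectedItems();
  var titles = [];
  for (var i = 0; i < items.length; i++) {
    titles.push(items[i].getField("title"));
  }
  return titles;
}

// In a Chickenfoot script: selectedTitles(chromeWindow.ZoteroPane).join("\n");
```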

Conclusion

This post is meant only to lay the foundation for the ultimate goal, which is to integrate Zotero and other systems using Chickenfoot. I hope it also whets your appetite for more — and if you're a Zotero user, that you'll go right away to install Chickenfoot and explore Zotero in a deeper way.


Zotero developer docs online

Just hot off the press: the Zotero Developer Documentation. Developers, start your engines! (I've printed the docs out and am already pondering how I'm supposed to add data from an external process without directly touching the zotero.sqlite file….)

I installed the coins-metadata plugin

I just installed the coins-metadata plugin for my various WordPress blogs so that Zotero can pick up the embedded OpenURL ContextObject in SPAN (COinS).
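For the curious: a COinS-enabled page marks each reference with an empty span of class "Z3988" whose title attribute carries the OpenURL ContextObject. A quick Chickenfoot-style check for such spans might look like the sketch below (the helper is mine, not part of the plugin):

```javascript
// Sketch: count COinS spans (class "Z3988") among a list of elements.
function countCoins(spans) {
  var count = 0;
  for (var i = 0; i < spans.length; i++) {
    if (spans[i].className == "Z3988") count++;
  }
  return count;
}

// In a Chickenfoot script:
// countCoins(document.getElementsByTagName("span"));
```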


Sorin Matei on Project Bamboo and the role of mashups

Project Bamboo has been on my list of stuff to write about for a while. According to the Project Bamboo website:

    Bamboo is a multi-institutional, interdisciplinary, and inter-organizational effort that brings together researchers in arts and humanities, computer scientists, information scientists, librarians, and campus information technologists to tackle the question: How can we advance arts and humanities research through the development of shared technology services?

Not only is the project of intellectual interest to me (as someone deeply interested in the issues of "shared technology services") but it is also of great personal interest (since I know quite a few of the personnel involved with the project, including one of the co-project directors, David Greenbaum, who used to be my boss). One particular angle I hope to explore: what are the implications of Project Bamboo for Zotero, and vice versa?

The immediate prompt for this post is Sorin Matei's The Bamboo Digital Humanities Initiative: A Modest Proposal. Matei's post has been of sufficient interest to me that I'm using it to prompt some discussion in a community of humanists and technologists. Matei makes a lot of useful points, but the segment that caught my attention is:

    The role of the Bamboo platform would be to simplify this task by making access to tools, by enhancing our ability to connect digital objects and artifacts, our ability to connect with colleagues and students via simple, directly intuitive and universally available interfaces that all converge on the scholars’ desktop, preferably in the format of a word processor. [emphasis mine] Moreover, the platform should integrate in the most straightforward manner the learning and writing processes with those dedicated to publishing. This should be done in such a manner that dedicated genres and modus operandi (articles, book monographs, peer review, scientific validity checks, etc.) would survive, flourish even, under the new digital regime.

Amen. That's an approach I've been pursuing for a while now (in the Scholar's Box, for example) — and one I think that Zotero, as a desktop client with some capacity for extensibility, can embody rather deeply.

Matei goes on:

    I stop here, rather abruptly, waiting for reactions. I am planning, however, to release a sketch of such a platform, including essential services and affordances. It will also try to leverage the idea of the mashup editor as basic architecture strategy, which could be use to support the infrastructure of the system.

I'm naturally intrigued as someone focused on mashups and interested in developing "Zotero as a mashup platform". Has Sorin Matei used Zotero? How would Zotero fit in with Matei's sketch of such a platform?


Some musings on where I'd like to go next professionally

In January, a correspondent, having heard that I was about to publish a book on mashups, wrote me, saying that he would "love to find out more what [I'm] thinking". Flattered to be asked, I replied. Here I quote an edited version of what I wrote. (I tend to like what I write in email because my writing tends to be energetically conversational.)

Let me tell you a bit of what I'm thinking and where I'm coming from. Obviously, I think that the topic of mashups is a big deal, given my willingness to write a whole book about it. The element that excites me most is the power that individuals and small groups of people now have to recombine data and services — to use mashups to make sense of the world — particularly in the corner of the world in which I'm immersed (teaching, learning, and research in the context of higher education, libraries, and museums). When I first learned about XML and web services, I thought — wow — this is going to change the way we do research and the way we teach and learn. I spoke about this topic at the O'Reilly ETCon in 2003.

I've built a research prototype (called the Scholar's Box) to enable scholars to gather data from different sources, create personal collections, and share them with others. (I'm an advisor to a project called Zotero (http://www.zotero.org/) — which provides a Firefox plugin to enable people to manage bibliographic collections within the web browser — and ultimately to share their collections.)

I teach a course at the School of Information at UC Berkeley called "Mixing and Remixing Information". This semester will be the third term I teach the course. It's a project-based course, in which the focus is on helping students build their own mashups (see http://blog.mixingandremixing.info/s08/class-projects/ for some mashups from [this] year's class). A good number of my students have next-to-no experience with web programming. I have found that showing students the power of mashups — to get people excited about the possibilities — and then teaching them how to make mashups is an excellent way into web programming. I took this approach with teenagers with some success last summer, when I taught a six-week course on the Berkeley campus.

In addition to teaching master's students this semester, I'll be teaching a six-week hands-on course to campus IT staff about building next-generation campus IT services — again by studying things like Flickr, Google Maps, and Yahoo! Pipes, getting them to build mashups, and thinking about how we can do things like that on campus — for administration and for research.

Now that I'm finished writing my book, I'm thinking about other opportunities. Perhaps it's just the geek in me, but I really do think that some combination of Web 2.0 mashups, a bit more rigor from SOA, imagination, and some understanding of real problems can transform the worlds of education and research (and other worlds too — but education and research are something I know about.) I'm setting out to build a small company whose goal is to help the educational community effectively use Web 2.0 ideas (with a specific emphasis on remixability) to change the way we do things in that community. I will confess that my business plan still needs to be written, however…. In the meantime, I'm experimenting with a mix of teaching, consulting, and building software. (Some collaborators and I have a grant proposal in to enhance the teaching and learning of art history by integrating Flickr into the computational fabric of the classroom.) Most of all, I believe in the power of ideas — hence, I wrote a book to teach others.

Lots of questions remain, however. Now that my teaching jobs have come to an end, I have some serious amounts of time to plot out my next steps. Writing is a great help to me in sorting out my thoughts, especially when I'm writing for a public audience. I would like to build a business but am unclear on exactly what it should look like. Undoubtedly, there will be details that would be unwise for me to share publicly — but I believe that a lot of my thinking would benefit from putting my ideas out there.

What I've been up to

Here's an update on my current professional activities that I hope will give you, my readers, a sense of where this blog will be heading:

  • My book Pro Web 2.0 Mashups: Remixing Data and Web Services was published by Apress on February 25, 2008.  It's gotten some good reviews, and I've heard from some happy readers. It's time, however, for some more intense promotion of my book to make sure it fully reaches the audience it is meant to serve. (Most of my book-related activities will be discussed at my MashupGuide blog.)

  • In April, I finished teaching a six-week course (“Building Next-Generation Campus Information Services”) for IT staff on the Berkeley campus. “The course [was] designed to introduce campus professionals to the concepts of Web 2.0, XML, web services, and elements of web application development through the lens of mashups. While completing a six-week long project, participants will advance their knowledge and abilities, and gain insight into potential solutions to the information management needs they face on the job.” I plan to post more details about the course, including how it was structured, what projects came out of the class, and how I think this course can be improved.

  • Last week marked the culminating open house of the Mixing and Remixing Information course I teach at the School of Information at UC Berkeley. I had a blast teaching the course for the third time, though I wonder whether it's time for a total (or at least substantial) revamp of the course.

  • I've started to contribute regularly to ProgrammableWeb, which I described in my book as “the most useful web site for keeping up with the world of mashups, specifically, the relationships between all the APIs and mashups out there.” That was before I started writing for it! See the posts I've written for PW so far.

  • Finally, I've recently become the Integration Advisor for the Zotero Project, working on developer documentation for them and thinking about how to integrate Zotero with other things (in a sense, Zotero as a client-side mashup platform) — specifically in the context of the Zotero-Internet Archive alliance. My work for Zotero will be a big part of what I'll be discussing on this blog.


A data architect on hiatus

Ever since I left my job as a data architect to focus on writing my book on mashups, I've not had much to say publicly about data architecture, especially as it applies to higher education and the world of libraries. Often, my posts have been in response to specific pieces of news that arrive on my desk in the course of my job. Now, since I have fewer immediate matters to which to react, I've been relatively inactive on this blog.

However, I do think a lot about some perhaps mundane problems that I face as I write my book — barriers that make it difficult to do research, to write up that research, and to present it on the Web. An example: even though I cite sources in my book, I've not figured out the best way to integrate Zotero (a bibliographic reference manager) into the writing process. I'm a tad embarrassed to admit that I've been formatting references by hand — even though I have a pretty good understanding of bibliographic reference managers and their potential benefits. (I used BibTeX in my Ph.D. dissertation.) How do I manage references that are scattered throughout my digital universe: in my social bookmarks, my Word documents, my wiki and blogs, etc.?

At any rate, please expect sporadic updates over the next months. Most of my blogging around my professional work will be happening on mashupguide.net. I will, however, write about ideas that come to me as I start to build up my consulting business around the use of XML, web services, and mashup-type thinking.

Positions at the Center for History and New Media at GMU

The Center for History and New Media at George Mason University, the folks behind Zotero, is hiring. They are doing wonderful work — check out the following listings if you have any interest in the intersection of history and digital technology. You can find the listings at

http://chnm.gmu.edu/news/archives/job_openings_postdoc_.php

which I quote here:

February 08, 2007

Job Openings – Post-Doc, Digital History Associate, Summer Intern

The Center for History and New Media is growing, and we are currently looking to fill positions at several levels:

Post-Doc in History of Science & Technology and/or Digital History: This is a one-year position (with possible renewal) at the rank of Research Assistant Professor at the Center for History and New Media (CHNM), which is closely affiliated with the Department of History and Art History at George Mason University. A PhD or advanced ABD in History or a closely related field is required. We are especially interested in people with some or all of the following credentials, but they are not required for the position: 1. experience in digital history or digital libraries; 2. strong technical background in new technology and new media; 3. administrative and organizational experience; 4. background in the history of science, technology, and industry, broadly defined; 5. background in post-1945 U.S. history. Please send letter of application, CV or resume, and three letters of recommendation (or dossier) to chnm@gmu.edu or Center for History and New Media, George Mason University, 4400 University Drive MS 1E7, Fairfax, VA 22030. Electronic submissions encouraged. Please use subject line "Digital Historian." We will begin considering applications 15 March 2006.

Digital History Associate: The Center for History and New Media (CHNM) at George Mason University is hiring two "Digital History Associates." We are seeking energetic, well-organized people who take initiative and work well collaboratively. We are especially interested in people with some combination of research experience, administrative experience, and web development and programming experience. These exciting, grant-funded positions are particularly appropriate for someone with combined interest in history and technology, but the only specific requirements are a BA by June 1, 2007, and a demonstrated interest in both history and the web. Please apply for position #10384Z online at jobs.gmu.edu and attach both a resume and a cover letter. We will begin considering applications on 3/15/07 and continue until the positions are filled.

Summer Intern – Humanities Computing: The Center for History and New Media (CHNM) at George Mason University is seeking creative, energetic, well-rounded, and well-organized college/high school students for 8-12 week paid summer internships in 2007 at a leading digital humanities center. Ability to work in a team is very important. Strong grades are essential. Preference will be given to those with working knowledge of one or more of the following: web-database development in PHP and MySQL; JavaScript, XML, CSS, and other technologies critical for Firefox development; and command-line Linux system administration. This is an especially good opportunity for someone with a combined interest in computing and history. Please send resume and cover letter with subject line: "humanities computing internship" to chnm@gmu.edu. We will begin considering applications on 2/15/07 and will continue until the position is filled.

About CHNM: Since 1994, the Center for History and New Media at George Mason University has used digital media and computer technology to change the ways that people—scholars, students, and the general public—learn about and use the past. We do that by bringing together the most exciting and innovative digital media with the latest and best historical scholarship. We believe that serious scholarship and cutting edge multimedia can be combined to promote an inclusive and democratic understanding of the past as well as a broad historical literacy that fosters deep understanding of the most complex issues about the past and present. CHNM's work has been internationally recognized for cutting-edge work in history and new media. Located in Fairfax, Virginia, CHNM is 15 miles from Washington, DC, and is accessible by public transportation.