I'm trying to scrape a site based on some user input. For example, the user gives me the pid of a product and a person's name, and a separate program launches the spider, gathers the data, and returns it to the user.

However, the only information I want is the product and the person, each found at its own XML link. If I know these two links and their patterns, how do I build the callbacks to parse the two different items?

For example, say I have these two Items defined:

\[code\]
from scrapy.item import Item, Field

class PersonItem(Item):
    name = Field()
    ...

class ProductItem(Item):
    pid = Field()
    ...
\[/code\]

And I know their links follow these patterns:

\[code\]
www.example.com/person/<name_of_person>/person.xml
www.example.com/<product_pid>/product.xml
\[/code\]

Then my spider would look something like this:

\[code\]
from scrapy.spider import BaseSpider

class MySpider(BaseSpider):
    name = "myspider"

    # simulated values, as if given by the user
    pid = "4545-fw"
    person = "bob"

    allowed_domains = ["example.com"]  # domains only, no scheme
    start_urls = ['http://www.example.com/person/%s/person.xml' % person,
                  'http://www.example.com/%s/product.xml' % pid]

    def parse(self, response):
        # not sure here whether I'm scraping the person or the product
        pass
\[/code\]

I know that I can also define rules with \[code\]Rule(SgmlLinkExtractor())\[/code\] and give the person and the product each their own parse callback. However, I'm not sure how rules apply here, since I think they are meant for crawling deeper, whereas I only need to scrape the surface level.
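The only approach I can think of is to dispatch on the URL inside \[code\]parse()\[/code\]. Here is a rough sketch of what I mean (\[code\]parse_person\[/code\], \[code\]parse_product\[/code\], and the XPath expressions are placeholders I made up):

\[code\]
from scrapy.item import Item, Field
from scrapy.selector import XmlXPathSelector
from scrapy.spider import BaseSpider

class PersonItem(Item):
    name = Field()

class ProductItem(Item):
    pid = Field()

class MySpider(BaseSpider):
    name = "myspider"
    pid = "4545-fw"
    person = "bob"
    allowed_domains = ["example.com"]
    start_urls = ['http://www.example.com/person/%s/person.xml' % person,
                  'http://www.example.com/%s/product.xml' % pid]

    def parse(self, response):
        # dispatch on which of the two known URL patterns we got back
        if response.url.endswith('/person.xml'):
            return self.parse_person(response)
        elif response.url.endswith('/product.xml'):
            return self.parse_product(response)

    def parse_person(self, response):
        xxs = XmlXPathSelector(response)
        item = PersonItem()
        # placeholder XPath, the real one depends on the XML layout
        item['name'] = xxs.select('//name/text()').extract()
        return item

    def parse_product(self, response):
        xxs = XmlXPathSelector(response)
        item = ProductItem()
        # placeholder XPath, the real one depends on the XML layout
        item['pid'] = xxs.select('//pid/text()').extract()
        return item
\[/code\]

Is checking \[code\]response.url\[/code\] like this reasonable, or is there a cleaner way, for example overriding \[code\]start_requests()\[/code\] and attaching a different callback to each \[code\]Request\[/code\]?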