XMLParser is eating my whitespace

Asselpexags

New Member
I am losing significant whitespace from a wiki page I am parsing, and I think the parser is the cause. I have this in my Groovy script:

[code]
@Grab(group='org.ccil.cowan.tagsoup', module='tagsoup', version='1.2')
def slurper = new XmlSlurper(new org.ccil.cowan.tagsoup.Parser())
slurper.keepWhitespace = true
inputStream.withStream {
    doc = slurper.parse(it)
    println "originalContent = " + doc.'**'.find { it.@id == 'editpageform' }.'**'.find { it.@name == 'originalContent' }.@value
}
[/code]

where inputStream is initialized from a GET request to the edit URL of a Confluence wiki page. Later on, in the withStream block, where I do this:

[code]
println "originalContent = " + doc.'**'.find { it.@id == 'editpageform' }.'**'.find { it.@name == 'originalContent' }.@value
[/code]

I notice that all the newlines in the page's original content have been stripped. I originally thought it was a server-side thing, but when I made the same request in my browser and viewed the source, I could see newlines in the "originalContent" hidden parameter. Is there an easy way to disable the whitespace normalization and preserve the contents of the field? The above was run against an internal Confluence wiki page, but it could most likely be reproduced when editing any arbitrary wiki page.

Update: I added the `slurper.keepWhitespace = true` call shown above in an attempt to preserve whitespace, but that still doesn't work. I'm thinking this setting is intended for element content and not attributes. Is there a way to easily tweak flags on the underlying Java XML parser? Is there a specific setting for whitespace in attribute values?
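One likely explanation: the XML 1.0 specification requires attribute-value normalization, which replaces newlines in attribute values with spaces during parsing, so the loss may happen inside the parser itself rather than in any tweakable setting. As a hedged workaround sketch (not a parser flag), the raw value could be pulled out of the unparsed response text before any parsing happens. The regex below is an assumption based on the form described above; it presumes the `value` attribute appears after `name` and is double-quoted with no embedded double quotes:

[code]
// Sketch of a workaround: read the raw page source, where the newlines
// inside the attribute are still intact, and extract the value directly.
def html = inputStream.getText('UTF-8')

// (?s) lets the character class span newlines inside the attribute value.
// Assumes: name appears before value in the tag, value is double-quoted.
def m = (html =~ /(?s)name="originalContent"[^>]*value="([^"]*)"/)
if (m.find()) {
    def originalContent = m.group(1)   // raw value, newlines preserved
    println "originalContent = " + originalContent
}
[/code]

Note this bypasses the parser entirely for that one field, so any HTML entities in the value (e.g. &quot; or &amp;) would still need unescaping separately; it is a sketch under the stated assumptions, not a general HTML-extraction solution.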