Single-line limit of 438196 characters?

Hi,

I’m using the Web DAT (in build 077) to fetch the output of a web API, which returns a single line of text with no carriage returns.

The line is truncated to 438196 characters.

Also, if I drop that line (which I also stored in a text file) into a Text DAT, e.g. by drag-and-dropping the file onto the TouchDesigner network pane, the line is truncated at the same length.
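For what it’s worth, here is a quick way to confirm that the saved file itself holds the full line, so the truncation can be blamed on the DAT rather than on the download. This is a plain-Python sketch; the temp file just stands in for wherever the API output was saved:

```python
import tempfile, os

def longest_line_length(path):
    """Return the length of the longest line in a text file."""
    with open(path, encoding='utf-8') as f:
        return max((len(line.rstrip('\n')) for line in f), default=0)

# Demo with a synthetic single-line file standing in for the saved response.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('x' * 500000)
    path = tmp.name

print(longest_line_length(path))  # 500000 -- above 438196, so the file is intact
os.remove(path)
```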

Is that a hard limit? It seems very strange, but both the Text DAT and the Web DAT behave the same!

I’ve also encountered this kind of limit in parameter fields.

For example, when putting a long array into the “selected row values” parameter of the Select DAT.

Does anyone know of a workaround? This is a very restrictive limit when dealing with web content of unknown size.

Hi Dani,

Did you try downloading the page with Python?

Create two Text DATs:

text1:

import urllib.request

# FancyURLopener is deprecated but still works in Python 3
opener = urllib.request.FancyURLopener({})
url = "http://www.derivative.ca"
f = opener.open(url)
# read() returns bytes; decode before writing into the Text DAT
op('text2').write(f.read().decode('utf-8'))

text2:

It will receive the data when you run 'text1'.

If that still doesn’t work, you could do this:

import urllib.request
import textwrap

opener = urllib.request.FancyURLopener({})
url = "http://www.derivative.ca/"
f = opener.open(url)
# decode the bytes, then wrap into lines of at most 1000 characters
content = f.read().decode('utf-8')
data = textwrap.wrap(content, width=1000)
for line in data:
	op('text2').write(line + "\n")

But I warn you, it’s not realtime: it can take two or three minutes to process.
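One thing that might speed this up considerably: the per-line write() calls are probably the bottleneck, so you could build the whole wrapped text in Python first and write it in a single call. A minimal sketch, with a synthetic string standing in for the downloaded content:

```python
import textwrap

# Hypothetical stand-in for the downloaded single-line content.
content = 'x' * 500000

# Wrap once, join once -- textwrap breaks the unbroken run
# into 1000-character pieces because break_long_words defaults to True.
chunks = textwrap.wrap(content, width=1000)
wrapped = '\n'.join(chunks)

# op('text2').write(wrapped)  # one write() call inside TouchDesigner
print(len(chunks))  # 500 lines of 1000 characters each
```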

This should be much faster with BeautifulSoup on Python 3:

import urllib.request
from bs4 import BeautifulSoup

opener = urllib.request.FancyURLopener({})
url = "http://www.derivative.ca"
f = opener.open(url)
# pass an explicit parser to avoid the bs4 warning
soup = BeautifulSoup(f.read(), 'html.parser')
op('text2').write(soup.prettify())
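If installing BeautifulSoup is a hassle, even a crude stdlib pass that inserts a newline after every tag will break the one giant line into many short ones, which is all the DAT needs. A rough sketch, with a tiny hypothetical page in place of the fetched HTML:

```python
import re

# Hypothetical single-line HTML standing in for the fetched page.
html = '<html><body><p>hello</p><p>world</p></body></html>'

# Insert a newline after every '>' so no single line grows unbounded.
# Crude compared to BeautifulSoup's prettify(), but stdlib-only.
broken = re.sub(r'>', '>\n', html).rstrip('\n')
print(broken.count('\n'))
```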

I’m actually not able to reproduce this. For example, fetching this 1 MB file works fine:
derivative.ca/temp/bigfile.txt
I think it’s possibly a special character in your text that’s causing it to get truncated.
Can you post an example?