Posted to microsoft.public.excel.programming
I'm using code I found here (thank you) to load the HTML source of a website into a string. My problem is that it doesn't always return the entire webpage; sometimes I only get a partial/incomplete version of the HTML. Is there any way to make sure I get it all, or to throw an error if Internet Explorer didn't get the whole page? Maybe I could just search for "</html" in the string and re-call the function if it's not found, but there could be more than one "</html" in a page's source, or there could be text after it that I would miss. Anyone have a better idea? Thanks. Here's the code:

Public Function sGetHTML(rsURL As String) As String
    Dim objIE As Object
    Set objIE = CreateObject("InternetExplorer.Application")
    With objIE
        .Navigate rsURL
        Do Until Not .Busy
            DoEvents
        Loop
        With .document
            If Not (.URL Like "res*") Then sGetHTML = .documentElement.innerHTML
        End With
        .Quit
    End With
    Set objIE = Nothing
End Function
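The `.Busy` loop alone can drop out before the DOM is fully built, which is the likely cause of the partial HTML. Two common fixes, sketched below as untested VBA (the readiness values and object names are standard COM APIs, but treat the exact code as a sketch, not a drop-in): wait until the browser's `readyState` reaches READYSTATE_COMPLETE (4) *and* the document itself reports "complete", or skip the browser entirely and issue a synchronous MSXML2.XMLHTTP request, which only returns after the whole response body has arrived.

```
' Sketch 1: wait for both the browser and the document to finish loading.
Public Function sGetHTMLWait(rsURL As String) As String
    Dim objIE As Object
    Set objIE = CreateObject("InternetExplorer.Application")
    With objIE
        .Navigate rsURL
        ' READYSTATE_COMPLETE = 4; .Busy alone can return too early.
        Do Until .readyState = 4 And Not .Busy
            DoEvents
        Loop
        ' The document has its own readyState; wait for it as well.
        Do Until .document.readyState = "complete"
            DoEvents
        Loop
        If Not (.document.URL Like "res*") Then
            sGetHTMLWait = .document.documentElement.innerHTML
        End If
        .Quit
    End With
    Set objIE = Nothing
End Function

' Sketch 2: a synchronous HTTP request returns only once the full
' response text has arrived, so no polling loop is needed.
Public Function sGetHTMLHttp(rsURL As String) As String
    Dim objHTTP As Object
    Set objHTTP = CreateObject("MSXML2.XMLHTTP")
    objHTTP.Open "GET", rsURL, False   ' False = synchronous
    objHTTP.send
    If objHTTP.Status = 200 Then sGetHTMLHttp = objHTTP.responseText
    Set objHTTP = Nothing
End Function
```

Note the trade-off: MSXML2.XMLHTTP returns the raw HTTP response rather than the DOM as Internet Explorer rendered it, so content generated by JavaScript after page load will not appear in the string.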