The way I mainly collect wallpapers is by torrents. If all options fail, I go to whatever website hosts the wallpapers and download them one at a time. My personal record is downloading almost 700 wallpapers at a stretch. It is a tedious, boring job, stuck in a loop of "right click - open in new tab - right click - save picture as".
Recently I had to download wallpapers of a particular artist, and I looked around for a torrent only to find stale links. So I was finally left with the option of downloading the papers manually.
Now, people who know me know that I'm one lazy ass. So entering that loop again was something I didn't want to do.
Recently I've been having an "affair" with PowerShell, and this seemed like an opportunity to... well, take things further... you know.
Disclaimer:
This was written by a lazy ass to download wallpapers from a website, not to save the world. So the script is dirty and fragile. It targets one particular scenario, and for me that is how it is supposed to be. The spirit of scripting, for me, is the quickest way of getting things done. So the script doesn't have any error checking, and if something is even slightly different, it'll fail.
Running a PowerShell Script:
If you have Windows 7 or Windows 8, PowerShell is installed by default. For earlier versions of Windows, you need to download and install PowerShell.
To run PowerShell, start a command prompt and execute the command "powershell" (without quotes).
Copy the script to a file and save it with a .ps1 extension.
In the PowerShell prompt, execute "Set-ExecutionPolicy -ExecutionPolicy Unrestricted". This allows us to execute scripts, as in the sample session below.
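For example, assuming the script is saved as getwalls.ps1 in the current folder (the file name is just a placeholder, call it whatever you want), a session would look roughly like this. Set-ExecutionPolicy may need an administrator prompt if it complains:

PS C:\wallpapers> Set-ExecutionPolicy -ExecutionPolicy Unrestricted
PS C:\wallpapers> .\getwalls.ps1 http://xtremewalls.com/category/hollywoodf/ashleesimpson/1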
The script downloads wallpapers from the site xtremewalls.com and places them in a folder named after the artist, created under the current directory. I search for the artist on the site, grab the URL, and provide it as input. This retrieves images on the current page only; we have to rerun the script for page 2, and so on (a rough wrapper for that is sketched below). As I said, a dirty way of doing it, but hey, it works... most of the time...
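Since the script only handles one page per run, a tiny wrapper loop saves some retyping. A minimal sketch, assuming the script is saved as getwalls.ps1 and the artist has 5 pages (the script name, artist, and page count are all placeholders):

# hypothetical wrapper: rerun the script for pages 1 through 5
$base = "http://xtremewalls.com/category/hollywoodf/ashleesimpson"
for($page = 1; $page -le 5; $page++)
{
    .\getwalls.ps1 "$base/$page"
}

Anyway, here is the script: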
<# download wallpapers from : http://xtremewalls.com/
They have a good collection of papers and good organization
http://xtremewalls.com/category/hollywoodf/<nameOfArtist>/<pageNumber>
http://xtremewalls.com/category/hollywoodf/ashleesimpson/1
Each page contains a number of links with varying resolutions. We need to
parse the links for the highest resolution, then turn over to the next page.
cumbersome, but hey, gotta do what ya gotta do
#>
# get link to first page
if($args.Count -lt 1)
{
    Write-Host -ForegroundColor Red "Invalid syntax"
    Write-Host -ForegroundColor Red "<scriptname> <urlToFirstPage>"
    Write-Host -ForegroundColor Red "eg: <scriptname> http://xtremewalls.com/category/hollywoodf/ashleesimpson/1"
    return 1
}
$url = $args[0]
# now extract artist name and category from the url
$tind = $url.LastIndexOf("/")
$tstr = $url.Substring(0,$tind) # drop the trailing /<pageNumber>
$tind = $tstr.LastIndexOf("/")
$artist = $tstr.Substring($tind+1,$tstr.length-1-$tind)
$tstr = $tstr.Substring(0,$tind)
$tind = $tstr.LastIndexOf("/")
$category = $tstr.Substring($tind+1,$tstr.length-1-$tind)
# save path
$saveloc=Get-Location
$saveloc = Join-Path $saveloc -ChildPath $artist
if(Test-Path $saveloc)
{
    "save location exists"
}
else
{
    mkdir $saveloc
}
# get webpage
$wc = New-Object net.webclient
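# WebClient gives us DownloadString for the html pages and DownloadFile for the images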
$webpage=$wc.DownloadString($url)
# split the page into lines
$lineArray = $webpage.Split("`r`n")
$lineArray = $lineArray | Sort-Object -Unique
# prepare patterns used for matching
# a full link looks like .../wallpaper/<category>/<artist>/<id>/<width>x<height>
$matchGroupLink = "http://xtremewalls.com/wallpaper/$category/$artist/\d*"
$matchLink = "$matchGroupLink/\d*x\d*"
# extract links (the line array is coerced into a single string here)
$linkArray = [regex]::Matches($lineArray, $matchLink)
# we have links. Now group the links by wallpaper id
$linkgroupArray = [regex]::Matches($lineArray, $matchGroupLink)
# @() forces an array even when only a single group link is found
$linkgroupArray = @($linkgroupArray | Select-Object -Unique)
# Now for each group link, get the link with the highest resolution
for($i=0; $i -lt $linkgroupArray.Length; $i++)
{
    $grouplink = $linkgroupArray[$i];
    # again, @() forces an array in case only one resolution link matches
    $allResolutionLinks = @($linkArray -match $grouplink)
    $allResolutionLinks = @($allResolutionLinks | Sort-Object -Unique)
    # get the link to the highest resolution
    $hires = 0
    $link = $null
    for($j=0; $j -lt $allResolutionLinks.Length; $j++)
    {
        # the trailing digits are the height, e.g. .../1920x1080 -> 1080
        $res = [regex]::Matches($allResolutionLinks[$j], "\d+$");
        $res = $res[0].Value
        if([int]$hires -le [int]$res)
        {
            $hires = $res
            $link = $allResolutionLinks[$j].Value
        }
    }
    # Now that we have the link, let's fetch the wallpaper page
    $jpgPageString = $wc.DownloadString($link.ToString())
    $jpgPageLines = $jpgPageString.Split("`r`n")
    $jpgLines = $jpgPageLines -match ".jpg"
    $jpgLinks = [regex]::Matches($jpgLines, "img src=(?!.*img.*).*$category/$artist.*jpg")
    # blindly take the first match
    $jpgstr = $jpgLinks[0].ToString()
    # strip the leading 'img src="' (9 characters) to get the image path
    $linkext = $jpgstr.Substring(9, $jpgstr.Length-9)
    # build the full download url and echo it as progress output
    $dstr = [string]::Concat($link, "/")
    $dstr = [string]::Concat($dstr, $linkext)
    $dstr
    # extract the filename and build the save path
    $dind = $dstr.LastIndexOf("/")
    $filename = $dstr.Substring($dind+1,$dstr.Length-$dind-1)
    $savepath = Join-Path $saveloc -ChildPath $filename
    $savepath
    # FINALLY DOWNLOAD THE DAMN PIC #
    $wc.DownloadFile($dstr, $savepath.ToString())
}