Getting past the Facebook API review process with a TD installation

For the past few weeks I’ve been jumping through hoops trying to get Facebook to grant me basic access to the posts of a public Facebook Page. Getting the permissions needed to access this data requires Facebook to review your app.

The frustrating part here is that this process is entirely geared towards reviewing consumer apps for mobile or web, built around the idea that random users will use your app to authenticate with their own Facebook accounts.

I’m just trying to access these posts for a single installation that displays the social posts of a single company, with only one user: the company’s own Facebook account.
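
For context, once an app actually gets approved, everything I need boils down to a single Graph API request for the Page’s posts, roughly like this (a minimal sketch with requests; the page ID and token are placeholders, and the permissions behind that token, e.g. pages_read_engagement or the Page Public Content Access feature, are exactly what the review gates):

# minimal sketch: read the posts of one Page via the Graph API
# (PAGE_ID and ACCESS_TOKEN are placeholders)
import requests

PAGE_ID = 'the-page-name-or-id'
ACCESS_TOKEN = 'EAAB...'

resp = requests.get(
    f'https://graph.facebook.com/v12.0/{PAGE_ID}/posts',
    params={
        'fields': 'message,created_time,full_picture,permalink_url',
        'access_token': ACCESS_TOKEN,
    },
)
resp.raise_for_status()

for post in resp.json().get('data', []):
    print(post.get('created_time'), post.get('message', ''))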

Facebook requires you to send them login details and a copy of the app so they can run through the process themselves and test it on various devices. Given that it’s a TD app made to run on a beefy computer, with no publicly accessible login screens, they keep declining my request for access.

My question is: has anyone successfully completed this process and gotten these permissions for a similar project? Or what (reliable) workaround have you been able to use?

Lots of thanks and appreciation to anyone that has any pointers.

Even though we talked about it already, I wanna give my 2 cents for the public too:
I think, tbh, the best way is to use a social media aggregator instead of fumbling around and building it on your own.
There are quite a lot of them, but this one seems quite promising, as it is built directly around a REST API:
https://www.socialmediawall.io/developer

The free tier also looks pretty promising for a start, and $7 per month seems like a reasonable price.
Caveat about mediawall: it seems like full API access costs $75, which can still be ok if it’s used on more than one setup though.

or scrape the social media wall page :sweat_smile:
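
Consuming one of these aggregator feeds usually just means a plain GET that returns JSON; very roughly something like this (the endpoint and field names below are made up, not their actual API):

# hypothetical sketch of polling an aggregator's REST feed
# (URL and JSON fields are placeholders, not socialmediawall's real API)
import json
import urllib.request

FEED_URL = 'https://example-aggregator.test/api/v1/wall.json?key=YOUR_KEY'

with urllib.request.urlopen(FEED_URL) as resp:
    feed = json.loads(resp.read().decode('utf-8'))

for item in feed.get('items', []):
    print(item.get('date'), item.get('text'), item.get('image'))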

For these cases I normally also use a social media aggregator service. Much better customer service :wink:
In the past I’ve had success integrating the API of https://www.tintup.com into TouchDesigner; they connect to almost every social media platform, and have advanced options & whitelabel apps/sites you can give to your client for content curation / filtering (which is more useful for large-scale setups).
They also have prices per day / event / installation afaik.
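
On the TD side this usually just comes down to filling a Table DAT from whatever JSON the aggregator returns; a rough sketch (the feed URL, JSON fields and operator name are placeholders, not TINT’s actual response format):

# rough sketch for a Script or Execute DAT inside TouchDesigner:
# pull an aggregator feed and fill a Table DAT with it
# (FEED_URL, the JSON fields and the operator name are placeholders)
import json
import urllib.request

FEED_URL = 'https://example-aggregator.test/api/v1/wall.json'

def refreshPosts():
    with urllib.request.urlopen(FEED_URL) as resp:
        items = json.loads(resp.read().decode('utf-8')).get('items', [])

    table = op('posts_table')  # a Table DAT somewhere in the network
    table.clear()
    table.appendRow(['text', 'image', 'date'])
    for item in items:
        table.appendRow([item.get('text', ''),
                         item.get('image', ''),
                         item.get('date', '')])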

The tricky thing is that it’s for a long-term installation, and I’m not sure how easy it will be to sell the client on a monthly fee at that price. If there’s no other option then of course I’d go for that, but for the moment I’m still looking at alternatives, especially since the only thing I can’t get so far is this single public Facebook page.

Although this post is really more about the Facebook API than about scraping, I figured I’d share what I’ve thrown together atm. Hopefully it helps any other members going down this path.

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import os
import urllib.request

# from secrets import username, password  # better: keep real credentials in a separate file

url = 'https://www.facebook.com'

driver = webdriver.Firefox()
driver.get(url)
time.sleep(2)

# dismiss the cookie banner (the button label is localized, "Alle akzeptieren" = "Accept all")
# note: find_element(s)_by_* is Selenium 3 syntax; Selenium 4 uses driver.find_elements(By.XPATH, ...)
cookies = driver.find_elements_by_xpath("//button")
for btn in cookies:
    if btn.get_attribute("innerHTML") == "Alle akzeptieren":
        btn.click()

# log in (use a throwaway account, the credentials here are placeholders)
email = driver.find_element_by_name("email")
pswrd = driver.find_element_by_name("pass")
email.send_keys("username@email.com")
time.sleep(1)
pswrd.send_keys("password")
time.sleep(3)
email.submit()
time.sleep(10)

# the mbasic version of the page is plain HTML and much easier to parse
driver.get('https://mbasic.facebook.com/nike')
pageContent = driver.page_source
soup = BeautifulSoup(pageContent, 'html.parser')

my_path = 'cache/'
os.makedirs(my_path, exist_ok=True)  # make sure the download folder exists

matches = soup.findAll('article')
for m in matches:
    # get photos (skip the static UI images) and save them locally
    for _p in m.findAll('img'):
        if 'static' not in _p['src']:
            print(_p['src'])
            filename = _p['src'].split("/")[-1].split("?")[0]
            print(filename)
            urllib.request.urlretrieve(_p['src'], os.path.join(my_path, filename))
    print('----')

    # get the post text
    for _s in m.findAll('p'):
        print(_s.get_text())
    print('----')

    # get the post date
    f = m.find('footer')
    if f and f.find('abbr'):
        print(f.find('abbr').get_text())
    print('__________')
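
In the actual setup the prints get swapped for a dump to disk that TD can pick up; continuing from the matches loop above, a minimal sketch of that last step (file name and structure are arbitrary, just my own convention):

# continuing from the soup / matches above: collect text + date per post
# and dump everything to a JSON file a DAT in TouchDesigner can reload
import json

posts = []
for m in matches:
    text = ' '.join(_s.get_text() for _s in m.findAll('p'))
    footer = m.find('footer')
    date = footer.find('abbr').get_text() if footer and footer.find('abbr') else ''
    posts.append({'text': text, 'date': date})

with open(os.path.join(my_path, 'posts.json'), 'w', encoding='utf-8') as fp:
    json.dump(posts, fp, ensure_ascii=False, indent=2)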