
HTTP request issue

This is extremely weird: when I invoke the function below from the editor console it works, but when the same function is invoked from Scheduled Tasks it fails with a request timeout. I am not able to understand why this is happening; please help me understand it.

import requests
from datetime import datetime

import common_functions  # project module that defines DATE_FORMAT


def get_available_index_option_strikes_nse(underlying: str, expiry: str) -> dict:
    strikes_available: dict = {}
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
            "Accept-Encoding": "gzip, deflate, br",
            "Accept-Language": "en-US,en;q=0.9",
        }
        session = requests.Session()

        # Warm-up request to the home page to pick up the site cookies,
        # then the option-chain API call itself.
        res = session.get("https://www.nseindia.com", headers=headers, timeout=60)
        res = session.get(f"https://www.nseindia.com/api/option-chain-indices?symbol={underlying.strip().upper()}",
                          headers=headers,
                          cookies=dict(res.cookies), timeout=60)

        if res.status_code == 200:
            data = res.json()
            # Keep only the strikes whose expiry matches the requested date.
            for strike in data["records"]["data"]:
                if strike["expiryDate"] == datetime.strptime(expiry, common_functions.DATE_FORMAT).strftime("%d-%b-%Y"):
                    strikes_available[strike["strikePrice"]] = {"CE": strike.get("CE"), "PE": strike.get("PE")}
        else:
            # Retry on any non-200 response (note: this recursion is unbounded).
            return get_available_index_option_strikes_nse(underlying, expiry)
    except Exception as e:
        print(e)

    return strikes_available

What is the actual error and which line is it raised from?

No errors

The line below is the one that times out:

res = session.get("https://www.nseindia.com", headers=headers, timeout=60)
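
One thing that may help narrow it down: requests accepts separate connect and read timeouts, so the scheduled task can log which phase actually fails. This is only a diagnostic sketch; the probe function, URL, and timeout values are my own assumptions, not part of the original code.

import requests

def probe_nse(url="https://www.nseindia.com"):
    # Hypothetical diagnostic helper: distinguishes "the server never accepted
    # the connection" from "it connected but the response stalled".
    headers = {"User-Agent": "Mozilla/5.0", "Accept-Language": "en-US,en;q=0.9"}
    try:
        # (connect timeout, read timeout) in seconds
        res = requests.get(url, headers=headers, timeout=(5, 30))
        print("status:", res.status_code)
    except requests.exceptions.ConnectTimeout:
        print("connect timeout: the connection was never accepted (often a sign of blocking)")
    except requests.exceptions.ReadTimeout:
        print("read timeout: connected, but the response never arrived")
    except requests.exceptions.ConnectionError as e:
        print("connection error (possibly reset by the server):", e)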

My guess, then, is that nseindia.com is detecting the access as scraping and blocking it in some way.
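
If that is what is happening, one thing worth trying from the scheduled task is replacing the unbounded recursive retry with a bounded retry loop and exponential backoff, reusing a single session so the warm-up cookies are carried automatically. This is only a sketch under the assumption that the block is transient; it will not help if the host's IP is blacklisted, and the function name and retry parameters are mine, not from the original post.

import time
import requests

def fetch_option_chain_with_backoff(url, headers, max_attempts=4):
    # Hypothetical helper: bounded retries instead of unbounded recursion.
    session = requests.Session()
    for attempt in range(max_attempts):
        try:
            # Warm-up request so the API call below carries the site cookies;
            # the Session keeps them automatically, so no cookies= argument is needed.
            session.get("https://www.nseindia.com", headers=headers, timeout=60)
            res = session.get(url, headers=headers, timeout=60)
            if res.status_code == 200:
                return res
            print(f"attempt {attempt + 1}: HTTP {res.status_code}")
        except requests.exceptions.RequestException as e:
            print(f"attempt {attempt + 1} failed: {e}")
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s
    return None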

Thanks for the clarification.