Get access to examples of workflows you can use and copy for your own applications
Here you can find some examples of analytical & agentic workflows that can be built & executed to analyse complex connected data with TuringDB.
Unfortunately, the open source LangGraph workflow wrapper could not be added to the August 2025 release in time, but it will be made available in the September 2025 release.
For now, the workflows can be used in our TuringDB cloud.
Here we take multiple clinical note documents (for this example we use synthetic data) to uncover relationships between treatments, markers, health events, symptoms, and clinical outcomes of a patient across multiple encounters.
📔 Notebook exported as HTML
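The core of this workflow is merging relation triples extracted from many encounters into one patient graph. A minimal sketch of that merge step, using hypothetical triples of the kind an LLM extraction node might return (the entity and relation names below are illustrative, not from the real dataset):

```python
from collections import defaultdict

# Hypothetical (subject, relation, object) triples, as an LLM
# extraction step over clinical notes might return them.
triples = [
    ("metformin", "TREATS", "type 2 diabetes"),
    ("metformin", "MONITORED_BY", "HbA1c"),
    ("HbA1c", "INDICATES", "glycaemic control"),
    ("type 2 diabetes", "CAUSES", "fatigue"),
]

# Accumulate the triples into an adjacency-list graph, so entities
# that recur across encounters merge into a single node.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

print(dict(graph)["metformin"])
```

Because the graph is keyed by entity name, a treatment mentioned in encounter 3 links back to the marker first seen in encounter 1.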
Here we study Apple's supply chain, examining the interactions and dependencies across different countries & regions for different parts (e.g. batteries, screens) of Apple products. The reference document is a report from the American Enterprise Institute.
📔 Notebook exported as HTML
Integrate a simple relational database, transforming it into a graph database structure to visualise, analyse, and reason on the graphs. Example with our partner antibody database from CiteAb (sample database): from table structure to graph.
📔 Notebook exported as HTML
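The table-to-graph transformation follows a common pattern: rows become nodes and foreign keys become edges. A minimal sketch with two toy tables (the real CiteAb schema is much richer; these column names are illustrative only):

```python
# Two toy relational tables, loosely modelled on an antibody database.
antibodies = [
    {"antibody_id": 1, "name": "anti-CD3", "supplier_id": 10},
    {"antibody_id": 2, "name": "anti-CD19", "supplier_id": 10},
]
suppliers = [{"supplier_id": 10, "name": "Acme Biotech"}]

# Rows become nodes; foreign-key references become edges.
nodes, edges = [], []
for row in antibodies:
    nodes.append(("Antibody", row["antibody_id"], row["name"]))
    edges.append(("SUPPLIED_BY", row["antibody_id"], row["supplier_id"]))
for row in suppliers:
    nodes.append(("Supplier", row["supplier_id"], row["name"]))

print(len(nodes), len(edges))
```

Once in this node/edge form, questions that need joins in SQL (e.g. "which antibodies share a supplier?") become simple graph traversals.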
Here we define a list of stock tickers from the ARKK ETF, automatically query them from a finance data API (e.g. Polygon or FinancialData), and then analyse them as a correlation graph of the stocks and a dependency graph of the companies in the ARKK ETF portfolio.
📔 Notebook exported as HTML
👨🏻💻 Workflow code
```python
%%time
builder = FlowBuilder()

# S3LoadFile node
s3_loader = builder.add_node(
    S3LoadFile.Node(
        params=S3LoadFile.Params(
            output_field="pdf_base64",
            user_id=user_id,
            file_key=file_key_pdf_input,
            file_type="pdf"
        )
    )
)

# ExtractTextPDF
pdf_text_extractor = builder.add_node(
    ExtractTextPDF.Node(
        name="ExtractTextPDF",
        params=ExtractTextPDF.Params(
            input_field="pdf_base64",
            output_field="pdf_text"
        )
    )
)

# LLM
llm = builder.add_node(
    LLM.Node(
        name="LLM",
        params=LLM.Params(
            input_field="pdf_text.all_content",
            output_field="list_companies_tickers",
            api_key_anthropic="<your_api_key_anthropic>",
            llm_provider="Anthropic",
            system_prompt="""
            Your role :
            - Extract companies tickers from this input text
            - Return the extracted companies tickers as a list
            Very important :
            - Return only this list, no other explanations
            """,
            output_format="list"
        )
    )
)

# ForEach
for_each = builder.add_node(
    ForEach.Node(
        params=ForEach.Params(
            list_field="list_companies_tickers",
            offset_field="i"
        )
    )
)

# FinancialDataRestAPI : stock prices
financial_data = builder.add_node(
    FinancialDataRestAPI.Node(
        params=FinancialDataRestAPI.Params(
            output_field=DataField(field="$stock_prices", action="append"),
            api_key=financial_data_api_key,
            endpoint_type="stock_prices",
            identifier=DataField(field="$list_companies_tickers[$i]", action="set"),
            # offset=0,
            # format="json"
        )
    )
)

# OutputText - Get all data
out = builder.add_node(
    OutputText.Node(
        params=OutputText.Params(
            input_field="$stock_prices[$i]",
            output_field=DataField(field="$stock_prices_all[$i]", action="append")
        )
    )
)

# Connect nodes
s3_loader.connect_to(pdf_text_extractor)
pdf_text_extractor.connect_to(llm)
llm.connect_to(for_each)
for_each.connect_to(financial_data)
financial_data.connect_to(out)
out.connect_to(for_each)  # loop back to ForEach for the next ticker

# Build flow
pipeline = builder.build()

# Execute flow
results = pipeline.execute()

# Show pipeline image
pipeline
```
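The cycle `ForEach → FinancialDataRestAPI → OutputText → ForEach` runs the API call once per extracted ticker, appending each result. Conceptually, in plain Python (this is a sketch of the control flow only, not the TuringDB API; `fetch_prices` is a stand-in for the real API node):

```python
# Plain-Python sketch of the ForEach loop-back: the graph cycle
# re-enters ForEach with an incremented offset until the list is exhausted.
def fetch_prices(ticker):
    # Stand-in for the FinancialDataRestAPI node.
    return {"ticker": ticker, "prices": [1.0, 1.1]}

list_companies_tickers = ["TSLA", "COIN", "ROKU"]
stock_prices_all = []

i = 0
while i < len(list_companies_tickers):                 # ForEach node
    result = fetch_prices(list_companies_tickers[i])   # FinancialDataRestAPI node
    stock_prices_all.append(result)                    # OutputText node ("append" action)
    i += 1                                             # back edge into ForEach

print(len(stock_prices_all))
```

When the offset `i` runs past the end of the list, `ForEach` stops re-entering the loop and the flow completes.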