Hfinger - Fingerprinting HTTP Requests

By: Zion3R


Tool for Fingerprinting HTTP requests of malware. Based on Tshark and written in Python3. Working prototype stage :-)

Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique here means that each fingerprint should be seen in only one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, while remaining human-interpretable.

Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests that were missed by other security systems but share a fingerprint with known ones.

An academic paper accompanies work on this tool, describing, for example, the motivation behind design choices and the evaluation of the tool against p0f, FATT, and Mercury.


    The idea

    The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide means for further analysis, for example, grouping of similar requests; at this moment, this is still a work in progress.

    After analysis of malware's HTTP requests and headers, we have identified some parts of requests as being most distinctive. These include:

    • Request method
    • Protocol version
    • Header order
    • Popular headers' values
    • Payload length, entropy, and presence of non-ASCII characters

    Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in detail here.

    The above features are translated into a variable-length representation, which is the actual fingerprint. Depending on the report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.

    Installation

    Minimum requirements needed before installation:

    • Python >= 3.3,
    • Tshark >= 2.2.0.

    Installation available from PyPI:

    pip install hfinger

    Hfinger has been tested on Xubuntu 22.04 LTS with the tshark package version 3.6.2, but it should work with older versions such as 2.6.10 on Xubuntu 18.04 or 3.2.3 on Xubuntu 20.04.

    Please note that, as with any PoC, you should run Hfinger in a separate environment, at least inside a Python virtual environment. Its setup is not covered here, but you can try this tutorial.

    Usage

    After installation, you can call the tool directly from a command line with hfinger or as a Python module with python -m hfinger.

    For example:

    foo@bar:~$ hfinger -f /tmp/test.pcap
    [{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]

    Help can be displayed with short -h or long --help switches:

    usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
                   [-l LOGFILE]

    Hfinger - fingerprinting malware HTTP requests stored in pcap files

    optional arguments:
      -h, --help            show this help message and exit
      -f FILE, --file FILE  Read a single pcap file
      -d DIR, --directory DIR
                            Read pcap files from the directory DIR
      -o output_path, --output-path output_path
                            Path to the output directory
      -m {0,1,2,3,4}, --mode {0,1,2,3,4}
                            Fingerprint report mode.
                            0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
                            1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
                            2 - optimal (the default mode),
                            3 - the lowest number of generated fingerprints, but the highest number of collisions,
                            4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
      -v, --verbose         Report information about non-standard values in the request
                            (e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
                            Without --logfile (-l) will print to the standard error.
      -l LOGFILE, --logfile LOGFILE
                            Output logfile in the verbose mode. Implies -v or --verbose switch.

    You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:

    hfinger -f example.pcap -o /tmp/pcap

    will be saved to:

    /tmp/pcap/example.pcap.json

    The report mode -m/--mode can be used to change the default report mode by providing an integer in the range 0-4. The modes differ in which request features are represented and how values are rounded. The default mode (2) was chosen by us to represent all features that are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints. With other modes, you can achieve different goals. For example, in mode 3 you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.

    Beginning with version 0.2.1, Hfinger is less verbose. You should use -v/--verbose if you want to receive information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. When any such issues are encountered in the verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the -l/--logfile switch (it implies -v/--verbose). The log data will be appended to the log file.

    Using hfinger in a Python application

    Beginning with version 0.2.0, Hfinger can be imported into other Python applications. To use it in your app, simply import the hfinger_analyze function from hfinger.analysis and call it with a path to the pcap file and the reporting mode. The returned result is a list of dicts with fingerprinting results.

    For example:

    from hfinger.analysis import hfinger_analyze

    pcap_path = "SPECIFY_PCAP_PATH_HERE"
    reporting_mode = 4
    print(hfinger_analyze(pcap_path, reporting_mode))

    Beginning with version 0.2.1, Hfinger uses the logging module for logging information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger named hfinger, but without prior configuration this log information is in practice discarded. If you want to receive it, configure the hfinger logger before calling hfinger_analyze: set the log level to logging.INFO, configure a log handler to your needs, and add it to the logger. More information is available in the hfinger_analyze function docstring.
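
    As a minimal sketch of such a configuration (the handler and format below are arbitrary choices; only the logger name hfinger and the logging.INFO level come from the description above):

    import logging
    from hfinger.analysis import hfinger_analyze

    # Configure the "hfinger" logger before calling hfinger_analyze,
    # otherwise its log information is effectively discarded.
    logger = logging.getLogger("hfinger")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()  # or logging.FileHandler("hfinger.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    print(hfinger_analyze("SPECIFY_PCAP_PATH_HERE", 2))  # 2 - the default report mode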

    Fingerprint creation

    A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.

    Three parts of the request are analyzed to extract information: URI, headers' structure (including method and protocol version), and payload. Particular features of the fingerprint are separated using | (pipe). The final fingerprint generated for the POST request from the example is:

    2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4

    The creation of features is described below in the order of appearance in the fingerprint.

    Firstly, URI features are extracted:

    • URI length, represented as a logarithm base 10 of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43)≈2),
    • number of directories (in the example there are 3 directories),
    • average directory length, represented as a logarithm base 10 of the actual average length of the directory, rounded to an integer (in the example there are three directories with a total length of 20 characters (6+6+8), so log10(20/3)≈1),
    • extension of the requested file, but only if it is on a list of known extensions in hfinger/configs/extensions.txt,
    • average value length, represented as a logarithm base 10 of the actual average value length, rounded to one decimal point (in the example the two values are both 4 characters long, so the average is 4 and log10(4)≈0.6).
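
    The arithmetic of these URI features can be reproduced with the numbers from the example above (this is not Hfinger's actual code, only the calculations it describes):

    import math

    # Values taken from the example request above.
    uri_length = 43                  # log10(43) ≈ 1.63 -> rounded to 2
    directory_lengths = [6, 6, 8]    # three directories, 20 characters in total
    value_lengths = [4, 4]           # two values of length 4

    print(round(math.log10(uri_length)))                                       # 2
    print(len(directory_lengths))                                              # 3
    print(round(math.log10(sum(directory_lengths) / len(directory_lengths))))  # 1
    print(round(math.log10(sum(value_lengths) / len(value_lengths)), 1))       # 0.6

    Together with the extension php, these give the leading 2|3|1|php|0.6 of the example fingerprint.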

    Secondly, header structure features are analyzed:

    • request method, encoded as the first two letters of the method (PO),
    • protocol version, encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9),
    • order of the headers,
    • and popular headers and their values.

    To represent the order of the headers in the request, each header's name is encoded according to the schema in hfinger/configs/headerslow.json; for example, the User-Agent header is encoded as us-ag. Encoded names are separated by ,. If the header name does not start with an upper case letter (or any of its parts does not, when analyzing compound headers such as Accept-Encoding), then the encoded representation is prefixed with !. If the header name is not on the list of known headers, it is hashed using the FNV1a hash, and the hash is used as the encoding.
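
    The 8-hexadecimal-digit hashes visible in the example fingerprint (e.g. f452d7a9) suggest a 32-bit FNV1a variant; a sketch of that hash is shown below, although Hfinger's exact input normalization is an assumption not covered here:

    def fnv1a_32(data: bytes) -> str:
        """32-bit FNV-1a hash, returned as 8 hex digits."""
        h = 0x811C9DC5                          # FNV offset basis
        for byte in data:
            h ^= byte
            h = (h * 0x01000193) & 0xFFFFFFFF   # multiply by FNV prime, keep 32 bits
        return f"{h:08x}"

    # A header name missing from headerslow.json would be represented by its hash:
    print(fnv1a_32(b"X-Totally-Custom-Header"))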

    When analyzing popular headers, the request is checked for their presence. These headers are:

    • Connection
    • Accept-Encoding
    • Content-Encoding
    • Cache-Control
    • TE
    • Accept-Charset
    • Content-Type
    • Accept
    • Accept-Language
    • User-Agent

    When the header is found in the request, its value is checked against a table of typical values to create pairs of header_name_representation:value_representation. The name of the header is encoded according to the schema in hfinger/configs/headerslow.json (as presented before), and the value is encoded according to the schema stored in the hfinger/configs directory or the configs.py file, depending on the header. In the above example, Accept is encoded as ac and its value */* as as-as (asterisk-asterisk), giving ac:as-as. The pairs are inserted into the fingerprint in order of appearance in the request and are delimited using /. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
    If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with ,, for example, Accept: */*, text/* would give ac:as-as,te-as. However, at this point of development, if the header value contains a "quality value" tag (q=), then the whole value is encoded with its FNV1a hash. Finally, values of User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.

    Finally, the payload features are analyzed:

    • presence of non-ASCII characters, represented with the letter N if present, and with A otherwise,
    • payload's Shannon entropy, rounded to an integer,
    • and payload length, represented as a logarithm base 10 of the actual payload length, rounded to one decimal point.
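
    The payload features can likewise be illustrated in a few lines (again only a sketch of the arithmetic described above, not Hfinger's implementation):

    import math
    from collections import Counter

    def payload_features(payload: bytes):
        # Non-ASCII flag, Shannon entropy (bits, rounded), log10 length (one decimal).
        ascii_flag = "N" if any(b > 0x7F for b in payload) else "A"
        counts = Counter(payload)
        entropy = -sum((c / len(payload)) * math.log2(c / len(payload))
                       for c in counts.values())
        return ascii_flag, round(entropy), round(math.log10(len(payload)), 1)

    # e.g. a short ASCII form-encoded payload
    print(payload_features(b"id=0123456789&data=abcdefghijklmn"))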

    Report modes

    Hfinger operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):

    • mode 0 - producing a similar number of collisions and fingerprints as mode 2, but using fewer features,
    • mode 1 - representing all designed features, but producing a little more collisions than modes 0, 2, and 4,
    • mode 2 - optimal (the default mode), representing all features which are usually used during requests' analysis, but also offering a low number of collisions and generated fingerprints,
    • mode 3 - producing the lowest number of generated fingerprints of all modes, but achieving the highest number of collisions,
    • mode 4 - offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes 0-2.

    The modes were chosen in order to optimize Hfinger's capability to uniquely identify malware families versus the number of generated fingerprints. Modes 0, 2, and 4 offer a similar number of collisions between malware families; however, mode 4 generates a little more fingerprints than the other two. Mode 2 represents more request features than mode 0 with a comparable number of generated fingerprints and collisions. Mode 1 is the only one representing all designed features, but it increases the number of collisions by almost two times compared to modes 0, 2, and 4. Mode 3 produces at least two times fewer fingerprints than the other modes, but it introduces about nine times more collisions. A description of all designed features is here.

    The modes consist of features (in the order of appearance in the fingerprint):

    • mode 0:
      • number of directories,
      • average directory length represented as an integer,
      • extension of the requested file,
      • average value length represented as a float,
      • order of headers,
      • popular headers and their values,
      • payload length represented as a float.
    • mode 1:
      • URI length represented as an integer,
      • number of directories,
      • average directory length represented as an integer,
      • extension of the requested file,
      • variable length represented as an integer,
      • number of variables,
      • average value length represented as an integer,
      • request method,
      • version of protocol,
      • order of headers,
      • popular headers and their values,
      • presence of non-ASCII characters,
      • payload entropy represented as an integer,
      • payload length represented as an integer.
    • mode 2:
      • URI length represented as an integer,
      • number of directories,
      • average directory length represented as an integer,
      • extension of the requested file,
      • average value length represented as a float,
      • request method,
      • version of protocol,
      • order of headers,
      • popular headers and their values,
      • presence of non-ASCII characters,
      • payload entropy represented as an integer,
      • payload length represented as a float.
    • mode 3:
      • URI length represented as an integer,
      • average directory length represented as an integer,
      • extension of the requested file,
      • average value length represented as an integer,
      • order of headers.
    • mode 4:
      • URI length represented as a float,
      • number of directories,
      • average directory length represented as a float,
      • extension of the requested file,
      • variable length represented as a float,
      • average value length represented as a float,
      • request method,
      • version of protocol,
      • order of headers,
      • popular headers and their values,
      • presence of non-ASCII characters,
      • payload entropy represented as a float,
      • payload length represented as a float.



    A Close Up Look at the Consumer Data Broker Radaris

    If you live in the United States, the data broker Radaris likely knows a great deal about you, and they are happy to sell what they know to anyone. But how much do we know about Radaris? Publicly available data indicates that in addition to running a dizzying array of people-search websites, the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

    Formed in 2009, Radaris is a vast people-search network for finding data on individuals, properties, phone numbers, businesses and addresses. Search for any American’s name in Google and the chances are excellent that a listing for them at Radaris.com will show up prominently in the results.

    Radaris reports typically bundle a substantial amount of data scraped from public and court documents, including any current or previous addresses and phone numbers, known email addresses and registered domain names. The reports also list address and phone records for the target’s known relatives and associates. Such information could be useful if you were trying to determine the maiden name of someone’s mother, or successfully answer a range of other knowledge-based authentication questions.

    Currently, consumer reports advertised for sale at Radaris.com are being fulfilled by a different people-search company called TruthFinder. But Radaris also operates a number of other people-search properties — like Centeda.com — that sell consumer reports directly and behave almost identically to TruthFinder: That is, reel the visitor in with promises of detailed background reports on people, and then charge a $34.99 monthly subscription fee just to view the results.

    The Better Business Bureau (BBB) assigns Radaris a rating of “F” for consistently ignoring consumers seeking to have their information removed from Radaris’ various online properties. Of the 159 complaints detailed there in the last year, several were from people who had used third-party identity protection services to have their information removed from Radaris, only to receive a notice a few months later that their Radaris record had been restored.

    What’s more, Radaris’ automated process for requesting the removal of your information requires signing up for an account, potentially providing more information about yourself that the company didn’t already have (see screenshot above).

    Radaris has not responded to requests for comment.

    Radaris, TruthFinder and others like them all force users to agree that their reports will not be used to evaluate someone’s eligibility for credit, or a new apartment or job. This language is so prominent in people-search reports because selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

    These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically do not include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

    But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and another people-search service Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

    An excerpt from the FTC’s complaint against TruthFinder and Instant Checkmate.

    The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

    “All the while, the companies touted the accuracy of their reports in online ads and other promotional materials, claiming that their reports contain ‘the MOST ACCURATE information available to the public,’” the FTC noted. The FTC says, however, that all the information used in their background reports is obtained from third parties that expressly disclaim that the information is accurate, and that TruthFinder and Instant Checkmate take no steps to verify the accuracy of the information.

    The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

    The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

    WHO IS RADARIS?

    According to Radaris’ profile at the investor website Pitchbook.com, the company’s founder and “co-chief executive officer” is a Massachusetts resident named Gary Norden, also known as Gary Nard.

    An analysis of email addresses known to have been used by Mr. Norden shows he is a native Russian man whose real name is Igor Lybarsky (also spelled Lubarsky). Igor’s brother Dmitry, who goes by “Dan,” appears to be the other co-CEO of Radaris. Dmitry Lybarsky’s Facebook/Meta account says he was born in March 1963.

    The Lybarsky brothers Dmitry or “Dan” (left) and Igor a.k.a. “Gary,” in an undated photo.

    Indirectly or directly, the Lybarskys own multiple properties in both Sherborn and Wellesley, Mass. However, the Radaris website is operated by an offshore entity called Bitseller Expert Ltd, which is incorporated in Cyprus. Neither Lybarsky brother responded to requests for comment.

    A review of the domain names registered by Gary Norden shows that beginning in the early 2000s, he and Dan built an e-commerce empire by marketing prepaid calling cards and VOIP services to Russian expatriates who are living in the United States and seeking an affordable way to stay in touch with loved ones back home.

    A Sherborn, Mass. property owned by Barsky Real Estate Trust and Dmitry Lybarsky.

    In 2012, the main company in charge of providing those calling services — Wellesley Hills, Mass-based Unipoint Technology Inc. — was fined $179,000 by the U.S. Federal Communications Commission, which said Unipoint never applied for a license to provide international telecommunications services.

    DomainTools.com shows the email address gnard@unipointtech.com is tied to 137 domains, including radaris.com. DomainTools also shows that the email addresses used by Gary Norden for more than two decades — epop@comby.com, gary@barksy.com and gary1@eprofit.com, among others — appear in WHOIS registration records for an entire fleet of people-search websites, including: centeda.com, virtory.com, clubset.com, kworld.com, newenglandfacts.com, and pub360.com.

    Still more people-search platforms tied to Gary Norden, like publicreports.com and arrestfacts.com, currently funnel interested customers to third-party search companies, such as TruthFinder and PersonTrust.com.

    The email addresses used by Gary Nard/Gary Norden are also connected to a slew of data broker websites that sell reports on businesses, real estate holdings, and professionals, including bizstanding.com, homemetry.com, trustoria.com, homeflock.com, rehold.com, difive.com and projectlab.com.

    AFFILIATE & ADULT

    Domain records indicate that Gary and Dan for many years operated a now-defunct pay-per-click affiliate advertising network called affiliate.ru. That entity used domain name servers tied to the aforementioned domains comby.com and eprofit.com, as did radaris.ru.

    A machine-translated version of Affiliate.ru, a Russian-language site that advertised hundreds of money making affiliate programs, including the Comfi.com prepaid calling card affiliate.

    Comby.com used to be a Russian language social media network that looked a great deal like Facebook. The domain now forwards visitors to Privet.ru (“hello” in Russian), a dating site that claims to have 5 million users. Privet.ru says it belongs to a company called Dating Factory, which lists offices in Switzerland. Privet.ru uses the Gary Norden domain eprofit.com for its domain name servers.

    Dating Factory’s website says it sells “powerful dating technology” to help customers create unique or niche dating websites. A review of the sample images available on the Dating Factory homepage suggests the term “dating” in this context refers to adult websites. Dating Factory also operates a community called FacebookOfSex, as well as the domain analslappers.com.

    RUSSIAN AMERICA

    Email addresses for the Comby and Eprofit domains indicate Gary Norden operates an entity in Wellesley Hills, Mass. called RussianAmerican Holding Inc. (russianamerica.com). This organization is listed as the owner of the domain newyork.ru, which is a site dedicated to orienting newcomers from Russia to the Big Apple.

    Newyork.ru’s terms of service refer to an international calling card company called ComFi Inc. (comfi.com) and list an address as PO Box 81362 Wellesley Hills, Ma. Other sites that include this address are russianamerica.com, russianboston.com, russianchicago.com, russianla.com, russiansanfran.com, russianmiami.com, russiancleveland.com and russianseattle.com (currently offline).

    ComFi is tied to Comfibook.com, which was a search aggregator website that collected and published data from many online and offline sources, including phone directories, social networks, online photo albums, and public records.

    The current website for russianamerica.com. Note the ad in the bottom left corner of this image for Channel One, a Russian state-owned media firm that is currently sanctioned by the U.S. government.

    AMERICAN RUSSIAN MEDIA

    Many of the U.S. city-specific online properties apparently tied to Gary Norden include phone numbers on their contact pages for a pair of Russian media and advertising firms based in southern California. The phone number 323-874-8211 appears on the websites russianla.com, russiasanfran.com, and rosconcert.com, which sells tickets to theater events performed in Russian.

    Historic domain registration records from DomainTools show rosconcert.com was registered in 2003 to Unipoint Technologies — the same company fined by the FCC for not having a license. Rosconcert.com also lists the phone number 818-377-2101.

    A phone number just a few digits away — 323-874-8205 — appears as a point of contact on newyork.ru, russianmiami.com, russiancleveland.com, and russianchicago.com. A search in Google shows this 82xx number range — and the 818-377-2101 number — belong to two different entities at the same UPS Store mailbox in Tarzana, Calif: American Russian Media Inc. (armediacorp.com), and Lamedia.biz.

    Armediacorp.com is the home of FACT Magazine, a glossy Russian-language publication put out jointly by the American-Russian Business Council, the Hollywood Chamber of Commerce, and the West Hollywood Chamber of Commerce.

    Lamedia.biz says it is an international media organization with more than 25 years of experience within the Russian-speaking community on the West Coast. The site advertises FACT Magazine and the Russian state-owned media outlet Channel One. Clicking the Channel One link on the homepage shows Lamedia.biz offers to submit advertising spots that can be shown to Channel One viewers. The price for a basic ad is listed at $500.

    In May 2022, the U.S. government levied financial sanctions against Channel One that bar US companies or citizens from doing business with the company.

    The website of lamedia.biz offers to sell advertising on two Russian state-owned media firms currently sanctioned by the U.S. government.

    LEGAL ACTIONS AGAINST RADARIS

    In 2014, a group of people sued Radaris in a class-action lawsuit claiming the company’s practices violated the Fair Credit Reporting Act. Court records indicate the defendants never showed up in court to dispute the claims, and as a result the judge eventually awarded the plaintiffs a default judgement and ordered the company to pay $7.5 million.

    But the plaintiffs in that civil case had a difficult time collecting on the court’s ruling. In response, the court ordered the radaris.com domain name (~9.4M monthly visitors) to be handed over to the plaintiffs.

    However, in 2018 Radaris was able to reclaim their domain on a technicality. Attorneys for the company argued that their clients were never named as defendants in the original lawsuit, and so their domain could not legally be taken away from them in a civil judgment.

    “Because our clients were never named as parties to the litigation, and were never served in the litigation, the taking of their property without due process is a violation of their rights,” Radaris’ attorneys argued.

    In October 2023, an Illinois resident filed a class-action lawsuit against Radaris for allegedly using people’s names for commercial purposes, in violation of the Illinois Right of Publicity Act.

    On Feb. 8, 2024, a company called Atlas Data Privacy Corp. sued Radaris LLC for allegedly violating “Daniel’s Law,” a statute that allows New Jersey law enforcement, government personnel, judges and their families to have their information completely removed from people-search services and commercial data brokers. Atlas has filed at least 140 similar Daniel’s Law complaints against data brokers recently.

    Daniel’s Law was enacted in response to the death of 20-year-old Daniel Anderl, who was killed in a violent attack targeting a federal judge (his mother). In July 2020, a disgruntled attorney who had appeared before U.S. District Judge Esther Salas disguised himself as a Fedex driver, went to her home and shot and killed her son (the judge was unharmed and the assailant killed himself).

    Earlier this month, The Record reported on Atlas Data Privacy’s lawsuit against LexisNexis Risk Data Management, in which the plaintiffs representing thousands of law enforcement personnel in New Jersey alleged that after they asked for their information to remain private, the data broker retaliated against them by freezing their credit and falsely reporting them as identity theft victims.

    Another data broker sued by Atlas Data Privacy — pogodata.com — announced on Mar. 1 that it was likely shutting down because of the lawsuit.

    “The matter is far from resolved but your response motivates us to try to bring back most of the names while preserving redaction of the 17,000 or so clients of the redaction company,” the company wrote. “While little consolation, we are not alone in the suit – the privacy company sued 140 property-data sites at the same time as PogoData.”

    Atlas says their goal is to convince more states to pass similar laws, and to extend those protections to other groups such as teachers, healthcare personnel and social workers. Meanwhile, media law experts say they’re concerned that enacting Daniel’s Law in other states would limit the ability of journalists to hold public officials accountable, and allow authorities to pursue criminal charges against media outlets that publish the same type of public and government records that fuel the people-search industry.

    PEOPLE-SEARCH CARVE-OUTS

    There are some pending changes to the US legal and regulatory landscape that could soon reshape large swaths of the data broker industry. But experts say it is unlikely that any of these changes will affect people-search companies like Radaris.

    On Feb. 28, 2024, the White House issued an executive order that directs the U.S. Department of Justice (DOJ) to create regulations that would prevent data brokers from selling or transferring abroad certain data types deemed too sensitive, including genomic and biometric data, geolocation and financial data, as well as other as-yet unspecified personal identifiers. The DOJ this week published a list of more than 100 questions it is seeking answers to regarding the data broker industry.

    In August 2023, the Consumer Financial Protection Bureau (CFPB) announced it was undertaking new rulemaking related to data brokers.

    Justin Sherman, an adjunct professor at Duke University, said neither the CFPB nor White House rulemaking will likely address people-search brokers because these companies typically get their information by scouring federal, state and local government records. Those government files include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

    “These dossiers contain everything from individuals’ names, addresses, and family information to data about finances, criminal justice system history, and home and vehicle purchases,” Sherman wrote in an October 2023 article for Lawfare. “People search websites’ business pitch boils down to the fact that they have done the work of compiling data, digitizing it, and linking it to specific people so that it can be searched online.”

    Sherman said while there are ongoing debates about whether people search data brokers have legal responsibilities to the people about whom they gather and sell data, the sources of this information — public records — are completely carved out from every single state consumer privacy law.

    “Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman wrote. “Tennessee’s consumer data privacy law, for example, stipulates that “personal information,” a cornerstone of the legislation, does not include ‘publicly available information,’ defined as:

    “…information that is lawfully made available through federal, state, or local government records, or information that a business has a reasonable basis to believe is lawfully made available to the general public through widely distributed media, by the consumer, or by a person to whom the consumer has disclosed the information, unless the consumer has restricted the information to a specific audience.”

    Sherman said this is the same language as the carve-out in the California privacy regime, which is often held up as the national leader in state privacy regulations. He said with a limited set of exceptions for survivors of stalking and domestic violence, even under California’s newly passed Delete Act — which creates a centralized mechanism for consumers to ask some third-party data brokers to delete their information — consumers across the board cannot exercise these rights when it comes to data scraped from property filings, marriage certificates, and public court documents, for example.

    “With some very narrow exceptions, it’s either extremely difficult or impossible to compel these companies to remove your information from their sites,” Sherman told KrebsOnSecurity. “Even in states like California, every single consumer privacy law in the country completely exempts publicly available information.”

    Below is a mind map that helped KrebsOnSecurity track relationships between and among the various organizations named in the story above:

    A mind map of various entities apparently tied to Radaris and the company’s co-founders. Click to enlarge.

    PurpleOps - An Open-Source Self-Hosted Purple Team Management Web Application

    By: Zion3R


    An open-source self-hosted purple team management web application.


    Key Features

    • Template engagements and testcases
    • Framework friendly
    • Role-based Access Control & MFA
    • Inbuilt DOCX reporting + custom template support

    How PurpleOps is different:

    • No attribution needed
    • Hackable, no "no-reversing" clauses
    • No overcomplication with Tomcat, Redis, manual database transplanting, or an obtuse permission model

    Installation

    # Clone this repository
    $ git clone https://github.com/CyberCX-STA/PurpleOps

    # Go into the repository
    $ cd PurpleOps

    # Alter PurpleOps settings (if you want to customize anything but should work out the box)
    $ nano .env

    # Run the app with docker
    $ sudo docker compose up

    # PurpleOps should now be available on http://localhost:5000. It is recommended to add a reverse proxy such as nginx or Apache in front of it if you want to expose this to the outside world.

    # Alternatively
    $ sudo docker run --name mongodb -d -p 27017:27017 mongo
    $ pip3 install -r requirements.txt
    $ python3 seeder.py
    $ python3 purpleops.py

    Contact Us

    We would love to hear back from you. If something is broken or you have an idea to make it better, add a ticket or ping us: pops@purpleops.app | @_w_m__

    Credits



    Nuclearpond - A Utility Leveraging Nuclei To Perform Internet Wide Scans For The Cost Of A Cup Of Coffee


    Nuclear Pond is used to leverage Nuclei in the cloud with remarkable speed and flexibility, and to perform internet-wide scans for far less than a cup of coffee.

    It leverages AWS Lambda as a backend to invoke Nuclei scans in parallel, offers the choice of storing JSON findings in S3 to query with AWS Athena, and is easily one of the cheapest ways you can execute scans in the cloud.
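
    At its core, this pattern is simply asynchronous Lambda invocation. A rough boto3 sketch of the idea is below; the event payload shape and the batching are hypothetical and not Nuclear Pond's actual interface:

    import json
    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")

    # Hypothetical event for one batch of targets.
    event = {"targets": ["example.com", "example.org"], "args": "-t dns"}

    # InvocationType="Event" returns immediately (HTTP 202), so many batches
    # can be fired off in parallel and the findings collected later, e.g. from S3.
    response = lambda_client.invoke(
        FunctionName="jwalker-nuclei-runner-function",  # name from the example further below
        InvocationType="Event",
        Payload=json.dumps(event).encode(),
    )
    print(response["StatusCode"])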


    Features

    • Output results to your terminal, json, or to S3
    • Specify threads and parallel invocations in any desired number of batches
    • Specify any Nuclei arguments just like you would locally
    • Specify a single host or from a file
    • Run the http server to take scans from the API
    • Run the http server to get status of the scans
    • Query findings through Athena for searching S3
    • Specify custom nuclei and reporting configurations

    Usage

    Think of Nuclear Pond as just a way for you to run Nuclei in the cloud. You can use it just as you would on your local machine, but run scans in parallel and against however many hosts you want to specify. All you need to think about is the nuclei command line flags you wish to pass to it.

    Setup & Installation

    To install Nuclear Pond, you need to configure the backend terraform module. You can do this by running terraform apply or by leveraging terragrunt.

    $ go install github.com/DevSecOpsDocs/nuclearpond@latest

    Environment Variables

    You can either pass in your backend with flags or through environment variables. You can use -f or --function-name to specify your Lambda function and -r or --region to specify the region. Below are the environment variables you can use.

    • AWS_LAMBDA_FUNCTION_NAME is the name of your lambda function to execute the scans on
    • AWS_REGION is the region your resources are deployed
    • NUCLEARPOND_API_KEY is the API key for authenticating to the API
    • AWS_DYNAMODB_TABLE is the dynamodb table to store API scan states

    Command line flags

    Below are some of the flags you can specify when running nuclearpond. The primary flags you need are -t or -l for your target(s), -a for the nuclei args, and -o to specify your output. When specifying Nuclei args you must pass them in as base64 encoded strings, for example -a $(echo -ne "-t dns" | base64).
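
    If you are wrapping nuclearpond in a script, the same base64 string can be produced in Python (a trivial sketch, equivalent to the shell pipeline above):

    import base64

    print(base64.b64encode(b"-t dns").decode())  # same output as: echo -ne "-t dns" | base64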

    Commands

    Below are the subcommands you can execute within nuclearpond.

    • run: Execute nuclei scans
    • service: Basic API to execute nuclei scans

    Run

    An example of the run subcommand is nuclearpond run -t devsecopsdocs.com -r us-east-1 -f jwalker-nuclei-runner-function -a $(echo -ne "-t dns" | base64) -o cmd -b 1, in which the target is devsecopsdocs.com, the region is us-east-1, the Lambda function name is jwalker-nuclei-runner-function, the nuclei arguments are -t dns, the output is cmd, and -b 1 executes one function invocation per batch of one host.

    $ nuclearpond run -h
    Executes nuclei tasks in parallel by invoking lambda asynchronously

    Usage:
    nuclearpond run [flags]

    Flags:
    -a, --args string nuclei arguments as base64 encoded string
    -b, --batch-size int batch size for number of targets per execution (default 1)
    -f, --function-name string AWS Lambda function name
    -h, --help help for run
    -o, --output string output type to save nuclei results(s3, cmd, or json) (default "cmd")
    -r, --region string AWS region to run nuclei
    -s, --silent silent command line output
    -t, --target string individual target to specify
    -l, --targets string list of targets in a file
    -c, --threads int number of threads to run lambda functions, default is 1 which will be slow (default 1)

    Custom Templates

    The terraform module by default downloads the templates on execution and also adds the templates as a layer. The variables to download templates use the terraform GitHub provider to download the release zip. The folder within the zip will be located under /opt. Since Nuclei downloads templates at run time this is not strictly required, but to improve performance you can specify -t /opt/nuclei-templates-9.3.4/dns to execute templates from the downloaded zip. To specify your own templates you must reference a release. When doing so on your own repository you must specify these variables in the terraform module; github_token is not required if your repository is public.

    • github_repository
    • github_owner
    • release_tag
    • github_token

    Retrieving Findings

    If you have specified s3 as the output, your findings will be located in S3. The fastest way to get at them is with Athena. Assuming you set up the terraform module as your backend, all you need to do is query them directly through Athena. You may have to configure query results if you have not done so already.

    select
    *
    from
    nuclei_db.findings_db
    limit 10;

    Advanced Query

    In order to get into queries a little deeper, I thought I would give you a quick example. In the select statement we drill down into the info column; the "matched-at" column must be in double quotes due to the - character; and we search only for high and critical findings generated by Nuclei.

    SELECT
    info.name,
    host,
    type,
    info.severity,
    "matched-at",
    info.description,
    template,
    dt
    FROM
    "nuclei_db"."findings_db"
    where
    host like '%devsecopsdocs.com'
    and info.severity in ('high','critical')

    Infrastructure

    The backend infrastructure is all within the terraform module. I would strongly recommend reading the readme associated with it, as it contains some important notes.

    • Lambda function
    • S3 bucket
      • Stores nuclei binary
      • Stores configuration files
      • Stores findings
    • Glue Database and Table
      • Allows you to query the findings in S3
      • Partitioned by the hour
      • Partition projection
    • IAM Role for Lambda Function


    EAST - Extensible Azure Security Tool - Documentation


    Extensible Azure Security Tool (later referred to as EAST) is a tool for assessing Azure, and to some extent Azure AD, security controls. The primary use case of EAST is security data collection for evaluation in Azure assessments. This information (JSON content) can then be used in various reporting tools, which we use to further correlate and investigate the data.


    This tool is licensed under MIT license.




    Collaborators

    Release notes

    • Preview branch introduced

      Changes:

      • Installation now accounts for Azure Cloud Shell's updated version with regard to dependencies (Cloud Shell now has Node.js v16 installed)

      • Checking of Databricks cluster types as per advisory

        • Audits Databricks clusters for potential privilege elevation - this control typically requires permissions on the Databricks cluster
      • Content.json now has key- and content-based sorting. This enables delta checks with git diff HEAD^1 ¹ as content.json has a predetermined order of results

      ¹ Word of caution: if you want to check deltas of content.json, then content.json will need to be "unignored" in .gitignore, exposing results to any upstream you might have configured.

      Use this feature with caution, and ensure you don't have a public upstream set for the branch you are using this feature for.

    • Change of programming patterns to avoid possible race conditions with larger datasets. This mostly involves changing var to let in for await-style loops


    Important

    Current status of the tool is beta
    • Fixes, updates, etc. are done on a "best effort" basis, with no guarantee of the time, or quality, of the possible fix applied
    • We do some additional tuning before using EAST in our daily work, such as applying various run and environment restrictions, besides familiarizing ourselves with the environment in question. Thus we currently recommend that EAST is run only in test environments, and with read-only permissions.
      • All the calls in the service are largely to Azure Cloud IPs, so it should work well in hardened environments where outbound IP restrictions are applied. This reduces the risk of this tool containing malicious packages which could "phone home" without also having C2 in Azure.
        • Essentially, running it in read-only mode reduces a lot of the risk associated with possibly compromised NPM packages (Google compromised NPM)
        • Bugs etc.: you can protect your environment against certain mistakes in this code by running the tool with read-only permissions
    • A lot of the code is "AS IS": meaning it has served only the purpose of producing a certain result; a lot of cleaning up and modularizing remains to be finished
    • There are no tests at the moment, apart from certain manual checks that are run after changes to main.js and various more advanced controls.
    • The control descriptions at this stage are not the final product, so giving feedback on them, while appreciated, is not the focus of the tooling right now
    • As the name implies, we use it as a tool to evaluate environments. It is not meant to be run unmonitored for the time being, and should not be run in any internet-exposed service that accepts incoming connections.
    • Documentation could be described as incomplete for the time being
    • EAST is mostly focused on PaaS resources, as most of our Azure assessments focus on this resource type
    • No input sanitization is performed on launch params, as it is always assumed that the input of these parameters is controlled. That being said, the tool uses exec() extensively; while I have not reviewed all paths, I believe that achieving shellcode execution is trivial. This tool does not assume hostile input, thus the recommendation is that you don't paste launch arguments into the command line without reviewing them first.

    Tool operation

    Dependencies

    To reduce the amount of code, the following dependencies are used for operation and aesthetics (kudos to the maintainers of these fantastic packages):

    Package and license (each used for operation and/or aesthetics):

    • axios (MIT)
    • yargs (MIT)
    • jsonwebtoken (MIT)
    • chalk (MIT)
    • js-beautify (MIT)

    Other dependencies for running the tool (if you are planning to run this in Azure Cloud Shell you don't need to install Azure CLI):

    • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

    Azure Cloud Shell (BASH) or applicable Linux Distro / WSL

    Requirement / description / install:

    • AZ CLI (AZCLI USE): curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
    • Node.js runtime 14 (Node.js runtime for EAST): install with NVM

    Controls

    EAST provides three categories of controls: Basic, Advanced, and Composite

    The machine-readable control looks like this, regardless of the type (Basic/Advanced/Composite):

    {
    "name": "fn-sql-2079",
    "resource": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
    "controlId": "managedIdentity",
    "isHealthy": true,
    "id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
    "Description": "\r\n Ensure The Service calls downstream resources with managed identity",
    "metadata": {
    "principalId": {
    "type": "SystemAssigned",
    "tenantId": "033794f5-7c9d-4e98-923d-7b49114b7ac3",
    "principalId": "cb073f1e-03bc-440e-874d-5ed3ce6df7f8"
    },
    "roles": [{
    "role": [{
    "properties": {
    "roleDefinitionId": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
    "principalId": "cb073f1e-03b c-440e-874d-5ed3ce6df7f8",
    "scope": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079",
    "createdOn": "2021-12-27T06:03:09.7052113Z",
    "updatedOn": "2021-12-27T06:03:09.7052113Z",
    "createdBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851",
    "updatedBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851"
    },
    "id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079/providers/Microsoft.Authorization/roleAssignments/ada69f21-790e-4386-9f47-c9b8a8c15674",
    "type": "Microsoft.Authorization/roleAssignments",
    "name": "ada69f21-790e-4386-9f47-c9b8a8c15674",
    "RoleName": "Contributor"
    }]
    }]
    },
    "category": "Access"
    },

    Basic

    Basic controls include checks on the initial ARM object for simple "toggle on/off" boolean settings of said service.

    Example: Azure Container Registry adminUser

    acr_adminUser


    Portal EAST

    if (item.properties?.adminUserEnabled == false ){returnObject.isHealthy = true }

    Advanced

    Advanced controls include checks beyond the initial ARM object, often invoking new requests to get further information about the resource in scope and its relation to other services.

    Example: Role Assignments

    Besides checking the role assignments of the subscription, an additional check is performed via Azure AD Conditional Access reporting for MFA, ensuring that privileged accounts are not protected only by passwords (SPNs with client secrets).

    Example: Azure Data Factory

    ADF_pipeLineRuns

    Azure Data Factory pipeline mapping combines pipelines -> activities -> data targets together and then checks for secrets leaked in the logs via the run history of said activities.



    Composite

    Composite controls combine two or more control results from the pipeline in order to form one or more new controls (see the sketch after the example below). Using composites solves two use cases for EAST:

    1. You can't guarantee the order of control results being returned in the pipeline
    2. You need to return more than one control result from a single check

    Example: composite_resolve_alerts

    1. Get alerts from Microsoft Cloud Defender on subscription check
    2. Form new controls per resourceProvider for alerts
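
    As an illustration of the composite idea only (EAST itself is written in Node.js, and the field names below are invented for the sketch), grouping alert results per resource provider into new per-provider controls could look like this:

    from collections import defaultdict

    # Hypothetical alert results collected earlier in the pipeline.
    alerts = [
        {"resourceProvider": "microsoft.web", "alert": "Suspicious process executed"},
        {"resourceProvider": "microsoft.storage", "alert": "Anonymous read access"},
        {"resourceProvider": "microsoft.web", "alert": "Access from a malicious IP"},
    ]

    # One composite check fans out into one control result per resource provider.
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[alert["resourceProvider"]].append(alert["alert"])

    controls = [
        {"controlId": f"composite_resolve_alerts_{provider}",
         "isHealthy": not findings,
         "metadata": {"alerts": findings}}
        for provider, findings in grouped.items()
    ]
    print(controls)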

    Reporting

    EAST is not focused on providing automated report generation, as it mostly provides JSON files with control and evaluation status. The idea is to use separate tooling to create reports, which is fairly trivial to automate via markdown creation scripts and tools such as Pandoc.

    • While the focus is not on reporting, this repo includes example automation for report creation with pandoc to ease reading of the results in a single document format.

    While this tool does not distribute pandoc, it can be used for creation of the reports; thus the following citation is added: https://github.com/jgm/pandoc/blob/master/CITATION.cff

    cff-version: 1.2.0
    title: Pandoc
    message: "If you use this software, please cite it as below."
    type: software
    url: "https://github.com/jgm/pandoc"
    authors:
    - given-names: John
    family-names: MacFarlane
    email: jgm@berkeley.edu
    orcid: 'https://orcid.org/0000-0003-2557-9090'
    - given-names: Albert
    family-names: Krewinkel
    email: tarleb+github@moltkeplatz.de
    orcid: '0000-0002-9455-0796'
    - given-names: Jesse
    family-names: Rosenthal
    email: jrosenthal@jhu.edu

    Running EAST scan

    This part is a guide on how to run the tool either in BASH on Linux or in BASH on Azure Cloud Shell (obviously Cloud Shell is Linux too, but it does not require that you have your own Linux box to use this).

    ⚠️If you are running the tool in Cloud Shell, you might need to reapply some of the installations, as Cloud Shell does not persist various session settings.

    Fire and forget prerequisites on cloud shell

    curl -o- https://raw.githubusercontent.com/jsa2/EAST/preview/sh/initForuse.sh | bash;

    jump to next step

    Detailed Prerequisites (this is if you opted not to do the "fire and forget" version)

    Prerequisites

    git clone https://github.com/jsa2/EAST --branch preview
    cd EAST;
    npm install

    Pandoc installation on cloud shell

    # Get pandoc for reporting (first time only)
    wget "https://github.com/jgm/pandoc/releases/download/2.17.1.1/pandoc-2.17.1.1-linux-amd64.tar.gz";
    tar xvzf "pandoc-2.17.1.1-linux-amd64.tar.gz" --strip-components 1 -C ~

    Installing pandoc on distros that support APT

    # Get pandoc for reporting (first time only)
    sudo apt install pandoc

    Login Az CLI and run the scan

    # Relogin is required to ensure token cache is placed on session on cloud shell

    az account clear
    az login

    #
    cd EAST
    # replace the subid below with your subscription ID!
    subId=6193053b-408b-44d0-b20f-4e29b9b67394
    #
    node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId


    Generate report

    cd EAST; node templatehelpers/eastReports.js --doc

    • If you want to include all Azure Security Benchmark results in the report

    cd EAST; node templatehelpers/eastReports.js --doc --asb

    Export report from cloud shell

    pandoc -s fullReport2.md -f markdown -t docx --reference-doc=pandoc-template.docx -o fullReport2.docx


    Azure DevOps (experimental): there is an Azure DevOps control for dumping pipeline logs. You can specify the control run as in the following example:

    node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId --azdevops "organizationName"

    Licensing

    Community use

    • Share relevant controls across multiple environments as community effort

    Company use

    • Companies have the possibility to develop company-specific controls which apply to company-specific work. Companies can then control these implementations by deciding to share, or not share, them based on the operating principles of that company.

    Non IPR components

    • Code logic and functions are under the MIT license. Since code logic and functions are already based on open-source components & vendor APIs, it does not make sense to restrict something that is already based on open source.

    If you use this tool as part of your commercial effort, we only require that you follow the very relaxed terms of the MIT license.

    Read license

    Tool operation documentation

    Principles

    AZCLI USE

    Existing tooling enhanced with Node.js runtime

    Use rich and maintained context of Microsoft Azure CLI login & commands with Node.js control flow which supplies enhanced rest-requests and maps results to schema.

    • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

    Speedup

    View more details

    ✅Using the Node.js runtime as orchestrator utilises Node's asynchronous nature, allowing batching of requests. Batching of requests utilizes the full extent of Azure Resource Manager's incredible speed.

    ✅Compared to running requests one-by-one, the speedup can be up to 10x when Node executes a batch of requests instead of a single request at a time.

    Parameters reference

    Example:

    node ./plugins/main.js --batch=10 --nativescope --roleAssignments --helperTexts=true --checkAad --scanAuditLogs --composites --shuffle --clearTokens
    Param / description / default if undefined (in parentheses):

    • --nativescope: currently mandatory parameter (no values)
    • --shuffle: can help with throttling; shuffles the resource list to reduce the possibility of the resource provider throttling threshold being met (no values)
    • --roleAssignments: checks controls as per microsoft.authorization (no values)
    • --includeRG: checks controls with ResourceGroups as per microsoft.authorization (no values)
    • --checkAad: checks controls as per microsoft.azureactivedirectory (no values)
    • --subInclude: defines subscription scope (no default; requires subscriptionID/s, if not defined will enumerate all subscriptions the user has access to)
    • --namespace: text filter which matches the full, or part of the, resource ID, e.g. /microsoft.storage/storageaccounts for all storage accounts in the scope (optional parameter)
    • --notIncludes: text filter which matches the full, or part of the, resource ID, e.g. /microsoft.storage/storageaccounts to exclude all storage accounts in the scope (optional parameter)
    • --batch: size of batch interval between throttles (5)
    • --wait: size of batch interval between throttles (1500)
    • --scanAuditLogs: optional parameter; when defined in hours, toggles Azure Activity Log scanning for weak authentication events, defined in scanAuditLogs (24h)
    • --composites: read composites (no values)
    • --clearTokens: clears tokens in the session folder; use this if you get authorization errors, or have just changed to another az login account; use az account clear if you want to clear the AZ CLI cache too (no values)
    • --tag: filter all results in the end based on a single tag, e.g. --tag=svc=aksdev (no values)
    • --ignorePreCheck: use this option when used with browser delegated tokens (no values)
    • --helperTexts: will append text descriptions from general to manual controls (no values)
    • --reprocess: will update results in existing content.json; useful for incremental runs (no values)

    Parameters reference for example report:

    node templatehelpers/eastReports.js --asb 
    Param / description / default if undefined (in parentheses):

    • --asb: gets all ASB results available to users (no values)
    • --policy: gets all Policy results available to users (no values)
    • --doc: prints pandoc string for export to console (no values)

    (Highly experimental) Running in restricted environments where only browser use is available

    Read here Running in restricted environments

    Developing controls

    Developer guide including control flow description is here dev-guide.md

    Updates and examples

    Auditing Microsoft.Web provider (Functions and web apps)

    ✅Check roles that are assigned to function managed identity in Azure AD and all Azure Subscriptions the audit account has access to
    ✅Relation mapping, check which keyVaults the function uses across all subs the audit account has access to
    ✅Check if Azure AD authentication is enabled
    ✅Check that generation of access tokens to the API requires assignment (.appRoleAssignmentRequired)
    ✅Audit bindings
    • Function or Azure AD Authentication enabled
    • Count and type of triggers

    ✅Check if SCM and FTP endpoints are secured


    Azure RBAC baseline authorization

    ⚠️Detect principals in privileged subscription roles protected only by password-based single-factor authentication.
    • Checks for users without MFA policies applied for a set of conditions
    • Checks for ServicePrincipals protected only by password (as opposed to using a Certificate Credential, workload federation and/or a workload identity CA policy)

    Maps to App Registration Best Practices

    • An unused credential on an application can result in a security breach. While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application

    ✅State healthy - User result example

    { 
    "subscriptionName": "EAST -msdn",
    "friendlyName": "joosua@thx138.onmicrosoft.com",
    "mfaResults": {
    "oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097",
    "appliedPol": [{
    "GrantConditions": "challengeWithMfa",
    "policy": "baseline",
    "oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097"
    }],
    "checkType": "mfa"
    },
    "basicAuthResults": {
    "oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097",
    "appliedPol": [{
    "GrantConditions": "challengeWithMfa",
    "policy": "baseline",
    "oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097"
    }],
    "checkType": "basicAuth"
    },
    }

    ⚠️State unHealthy - Application principal example

    { 
    "subscriptionName": "EAST - HoneyPot",
    "friendlyName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
    "creds": {
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#servicePrincipals(id,displayName,appId,keyCredentials,passwordCredentials,servicePrincipalType)/$entity",
    "id": "babec804-037d-4caf-946e-7a2b6de3a45f",
    "displayName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
    "appId": "5af1760e-89ff-46e4-a968-0ac36a7b7b69",
    "servicePrincipalType": "Application",
    "keyCredentials": [],
    "passwordCredentials": [],
    "OnlySingleFactor": [{
    "customKeyIdentifier": null,
    "endDateTime": "2023-10-20T06:54:59.2014093Z",
    "keyId": "7df44f81-a52c-4fd6-b704-4b046771f85a",
    "startDateTime": "2021-10-20T06:54:59.2014093Z",
    "secretText": null,
    "hint": nu ll,
    "displayName": null
    }],
    "StrongSingleFactor": []
    }
    }

    Contributing

    Following methods work for contributing for the time being:

    1. Submit a pull request with code / documentation change
    2. Submit an issue
      • issue can be a:
      • ⚠️Problem (issue)
      • Feature request
      • ❔Question

    Other

    1. By default EAST tries to work with the current dependencies - introducing new (direct) dependencies is not directly encouraged with EAST. If such a vital dependency is introduced, then review the licensing of that dependency and update readme.md - dependencies
      • There is nothing to prevent you from creating your own fork of EAST with your own dependencies

