"""Discovery of nodes common to two repositories.

The algorithm works in the following way. You have two repositories: local
and remote. They both contain a DAG of changelists. The goal of the
discovery protocol is to find one set of nodes, *common*, the set of nodes
shared by local and remote.

One of the issues with the original protocol was latency: it could
potentially require many round trips to discover that the local repo was a
subset of remote (which is a very common case; you usually have few changes
compared to upstream, while upstream probably had lots of development).

The new protocol only requires one interface for the remote repo: `known()`,
which, given a set of changelists, tells you if they are present in the DAG.

The algorithm then works as follows:

- We will be using three sets, `common`, `missing`, `unknown`. Originally
  all nodes are in `unknown`.
- Take a sample from `unknown`, call `remote.known(sample)`
  - For each node that remote knows, move it and all its ancestors to
    `common`
  - For each node that remote doesn't know, move it and all its descendants
    to `missing`
- Iterate until `unknown` is empty

There are a couple of optimizations. First, instead of starting with a
random sample of missing, start by sending all heads; in the case where the
local repo is a subset, you compute the answer in one round trip. Second,
you can do something similar to the bisecting strategy used when finding
faulty changesets: instead of random samples, try picking nodes that will
maximize the number of nodes classified with them (since all ancestors or
descendants will be marked as well).
"""

from __future__ import absolute_import

import collections
import random

from .i18n import _
from .node import nullrev
from . import (
    error,
    policy,
    util,
)
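The three-set loop described in the docstring can be sketched as a toy, self-contained simulation. This is an illustrative reconstruction of the idea only, not Mercurial's implementation; `discover`, `remote_known`, and `sample_fn` are hypothetical names.

```python
# Toy sketch of the discovery loop: classify every node as common or
# missing using only a remote "known" oracle. Hypothetical helper, not
# Mercurial's actual code.
def discover(all_nodes, ancestors, descendants, remote_known, sample_fn):
    common, missing = set(), set()
    unknown = set(all_nodes)
    while unknown:
        sample = sample_fn(unknown)      # pick a sample of undecided revs
        known = remote_known(sample)     # one round trip to the remote
        for node in sample:
            if node in known:
                # remote has it: it and all its ancestors are common
                common |= ancestors[node] | {node}
            else:
                # remote lacks it: it and all its descendants are missing
                missing |= descendants[node] | {node}
        unknown -= common | missing
    return common, missing
```

On a linear chain 0..4 where the remote only knows 0..2, sampling the highest undecided rev each round classifies every node in a handful of round trips.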
def _updatesample(revs, heads, sample, parentfn, quicksamplesize=0):
    """update an existing sample to match the expected size

    The sample is updated with revs exponentially distant from each head of
    the <revs> set. (H~1, H~2, H~4, H~8, etc).

    If a target size is specified, the sampling will stop once this size is
    reached. Otherwise sampling will happen until roots of the <revs> set
    are reached.

    :revs:  set of revs we want to discover (if None, assume the whole dag)
    :heads: set of DAG head revs
    :sample: a sample to update
    :parentfn: a callable to resolve parents for a revision
    :quicksamplesize: optional target size of the sample
    """
    dist = {}
    visit = collections.deque(heads)
    seen = set()
    factor = 1
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            sample.add(curr)
            if quicksamplesize and len(sample) >= quicksamplesize:
                return
        seen.add(curr)

        for p in parentfn(curr):
            if p != nullrev and (revs is None or p in revs):
                dist.setdefault(p, d + 1)
                visit.append(p)


def _limitsample(sample, desiredlen, randomize=True):
    """return a random subset of sample of at most desiredlen items.

    If randomize is False, though, a deterministic subset is returned.
    This is meant for integration tests.
    """
    if len(sample) <= desiredlen:
        return sample
    if randomize:
        return set(random.sample(sample, desiredlen))
    sample = list(sample)
    sample.sort()
    return set(sample[:desiredlen])


class partialdiscovery(object):
    """an object representing ongoing discovery

    Fed with data from the remote repository, this object keeps track of
    the current set of changesets in various states:

    - common:    revs also known remotely
    - undecided: revs we don't have information on yet
    - missing:   revs missing remotely
    (all tracked revisions are known locally)
    """

    # The recovered bytecode is truncated here, partway into the class body.
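The exponential-distance idea behind `_updatesample` can be shown on its own with a linear chain: walking from a head toward the root, keep only the revisions whose distance from the head lands on the current power-of-two threshold. This is a standalone, hypothetical sketch (`exponential_sample` and `parent_of` are invented names), not the function above.

```python
# Hypothetical standalone sketch: keep revisions at exponentially spaced
# distances from a head of a linear chain. The head itself counts as
# distance 1, mirroring the dist/factor bookkeeping in _updatesample.
def exponential_sample(head, parent_of):
    sample = set()
    dist, factor = 1, 1
    curr = head
    while curr is not None:
        if dist > factor:
            factor *= 2
        if dist == factor:
            sample.add(curr)
        curr = parent_of(curr)
        dist += 1
    return sample
```

For a 17-revision chain headed at rev 16, this keeps {16, 15, 13, 9, 1}: the head plus ancestors at exponentially growing offsets, so a long linear history contributes only O(log n) sample points.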